00:00:00.002 Started by upstream project "autotest-per-patch" build number 127108 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.108 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.112 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.175 Fetching changes from the remote Git repository 00:00:00.177 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.230 Using shallow fetch with depth 1 00:00:00.230 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.230 > git --version # timeout=10 00:00:00.269 > git --version # 'git version 2.39.2' 00:00:00.269 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.307 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.307 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.643 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.659 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.675 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:06.675 > git config core.sparsecheckout # timeout=10 00:00:06.686 > git read-tree -mu HEAD # timeout=10 00:00:06.702 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.768 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.768 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.864 [Pipeline] Start of Pipeline 00:00:06.880 [Pipeline] library 00:00:06.882 Loading library shm_lib@master 00:00:06.883 Library shm_lib@master is cached. Copying from home. 00:00:06.899 [Pipeline] node 00:00:06.916 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.918 [Pipeline] { 00:00:06.927 [Pipeline] catchError 00:00:06.928 [Pipeline] { 00:00:06.938 [Pipeline] wrap 00:00:06.945 [Pipeline] { 00:00:06.951 [Pipeline] stage 00:00:06.952 [Pipeline] { (Prologue) 00:00:07.126 [Pipeline] sh 00:00:07.407 + logger -p user.info -t JENKINS-CI 00:00:07.427 [Pipeline] echo 00:00:07.429 Node: WFP22 00:00:07.437 [Pipeline] sh 00:00:07.736 [Pipeline] setCustomBuildProperty 00:00:07.751 [Pipeline] echo 00:00:07.753 Cleanup processes 00:00:07.758 [Pipeline] sh 00:00:08.043 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.043 2384860 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.058 [Pipeline] sh 00:00:08.342 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.342 ++ grep -v 'sudo pgrep' 00:00:08.342 ++ awk '{print $1}' 00:00:08.342 + sudo kill -9 00:00:08.342 + true 00:00:08.360 [Pipeline] cleanWs 00:00:08.371 [WS-CLEANUP] Deleting project workspace... 00:00:08.371 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.378 [WS-CLEANUP] done 00:00:08.383 [Pipeline] setCustomBuildProperty 00:00:08.401 [Pipeline] sh 00:00:08.684 + sudo git config --global --replace-all safe.directory '*' 00:00:08.778 [Pipeline] httpRequest 00:00:08.796 [Pipeline] echo 00:00:08.798 Sorcerer 10.211.164.101 is alive 00:00:08.809 [Pipeline] httpRequest 00:00:08.814 HttpMethod: GET 00:00:08.814 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.815 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.817 Response Code: HTTP/1.1 200 OK 00:00:08.818 Success: Status code 200 is in the accepted range: 200,404 00:00:08.818 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.598 [Pipeline] sh 00:00:09.882 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.899 [Pipeline] httpRequest 00:00:09.922 [Pipeline] echo 00:00:09.925 Sorcerer 10.211.164.101 is alive 00:00:09.932 [Pipeline] httpRequest 00:00:09.937 HttpMethod: GET 00:00:09.937 URL: http://10.211.164.101/packages/spdk_38b03952e12e8573906cdc00be8434c9e81d5975.tar.gz 00:00:09.938 Sending request to url: http://10.211.164.101/packages/spdk_38b03952e12e8573906cdc00be8434c9e81d5975.tar.gz 00:00:09.948 Response Code: HTTP/1.1 200 OK 00:00:09.949 Success: Status code 200 is in the accepted range: 200,404 00:00:09.950 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_38b03952e12e8573906cdc00be8434c9e81d5975.tar.gz 00:00:31.476 [Pipeline] sh 00:00:31.758 + tar --no-same-owner -xf spdk_38b03952e12e8573906cdc00be8434c9e81d5975.tar.gz 00:00:34.317 [Pipeline] sh 00:00:34.605 + git -C spdk log --oneline -n5 00:00:34.605 38b03952e bdev/compress: check pm path for creating compress bdev 00:00:34.605 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:00:34.605 50222f810 configure: don't exit on non Intel platforms 00:00:34.605 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:00:34.605 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:00:34.618 [Pipeline] } 00:00:34.636 [Pipeline] // stage 00:00:34.646 [Pipeline] stage 00:00:34.648 [Pipeline] { (Prepare) 00:00:34.667 [Pipeline] writeFile 00:00:34.684 [Pipeline] sh 00:00:34.965 + logger -p user.info -t JENKINS-CI 00:00:34.978 [Pipeline] sh 00:00:35.261 + logger -p user.info -t JENKINS-CI 00:00:35.274 [Pipeline] sh 00:00:35.555 + cat autorun-spdk.conf 00:00:35.555 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.555 SPDK_TEST_NVMF=1 00:00:35.555 SPDK_TEST_NVME_CLI=1 00:00:35.555 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.555 SPDK_TEST_NVMF_NICS=e810 00:00:35.555 SPDK_TEST_VFIOUSER=1 00:00:35.555 SPDK_RUN_UBSAN=1 00:00:35.555 NET_TYPE=phy 00:00:35.562 RUN_NIGHTLY=0 00:00:35.567 [Pipeline] readFile 00:00:35.595 [Pipeline] withEnv 00:00:35.597 [Pipeline] { 00:00:35.610 [Pipeline] sh 00:00:35.932 + set -ex 00:00:35.932 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:35.932 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:35.932 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.932 ++ SPDK_TEST_NVMF=1 00:00:35.932 ++ SPDK_TEST_NVME_CLI=1 00:00:35.932 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.932 ++ SPDK_TEST_NVMF_NICS=e810 00:00:35.932 ++ SPDK_TEST_VFIOUSER=1 00:00:35.932 ++ SPDK_RUN_UBSAN=1 00:00:35.932 ++ NET_TYPE=phy 00:00:35.932 ++ RUN_NIGHTLY=0 00:00:35.932 + case $SPDK_TEST_NVMF_NICS in 00:00:35.932 + 
DRIVERS=ice 00:00:35.932 + [[ tcp == \r\d\m\a ]] 00:00:35.932 + [[ -n ice ]] 00:00:35.932 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:35.932 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:35.932 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:35.932 rmmod: ERROR: Module irdma is not currently loaded 00:00:35.932 rmmod: ERROR: Module i40iw is not currently loaded 00:00:35.932 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:35.932 + true 00:00:35.932 + for D in $DRIVERS 00:00:35.932 + sudo modprobe ice 00:00:35.932 + exit 0 00:00:35.942 [Pipeline] } 00:00:35.958 [Pipeline] // withEnv 00:00:35.964 [Pipeline] } 00:00:35.980 [Pipeline] // stage 00:00:35.989 [Pipeline] catchError 00:00:35.990 [Pipeline] { 00:00:36.003 [Pipeline] timeout 00:00:36.003 Timeout set to expire in 50 min 00:00:36.006 [Pipeline] { 00:00:36.020 [Pipeline] stage 00:00:36.021 [Pipeline] { (Tests) 00:00:36.035 [Pipeline] sh 00:00:36.321 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.321 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.321 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.321 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:36.321 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:36.321 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:36.321 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:36.321 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:36.321 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:36.321 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:36.321 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:36.321 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.321 + source /etc/os-release 00:00:36.321 ++ NAME='Fedora Linux' 00:00:36.321 ++ VERSION='38 (Cloud Edition)' 00:00:36.321 ++ ID=fedora 00:00:36.321 ++ VERSION_ID=38 00:00:36.321 ++ VERSION_CODENAME= 00:00:36.321 ++ PLATFORM_ID=platform:f38 00:00:36.321 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:36.321 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:36.321 ++ LOGO=fedora-logo-icon 00:00:36.321 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:36.321 ++ HOME_URL=https://fedoraproject.org/ 00:00:36.321 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:36.321 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:36.321 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:36.321 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:36.321 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:36.321 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:36.321 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:36.321 ++ SUPPORT_END=2024-05-14 00:00:36.321 ++ VARIANT='Cloud Edition' 00:00:36.321 ++ VARIANT_ID=cloud 00:00:36.321 + uname -a 00:00:36.321 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:36.321 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:38.860 Hugepages 00:00:38.860 node hugesize free / total 00:00:38.860 node0 1048576kB 0 / 0 00:00:38.860 node0 2048kB 0 / 0 00:00:38.860 node1 1048576kB 0 / 0 00:00:38.860 node1 2048kB 0 / 0 00:00:38.860 00:00:38.860 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:38.860 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:38.860 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:38.860 I/OAT 0000:00:04.2 8086 
2021 0 ioatdma - - 00:00:38.860 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:38.860 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:38.860 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:38.860 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:38.860 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:38.860 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:38.860 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:38.860 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:38.860 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:38.860 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:38.860 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:38.860 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:38.860 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:38.860 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:38.860 + rm -f /tmp/spdk-ld-path 00:00:38.860 + source autorun-spdk.conf 00:00:38.860 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.860 ++ SPDK_TEST_NVMF=1 00:00:38.860 ++ SPDK_TEST_NVME_CLI=1 00:00:38.860 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.860 ++ SPDK_TEST_NVMF_NICS=e810 00:00:38.860 ++ SPDK_TEST_VFIOUSER=1 00:00:38.860 ++ SPDK_RUN_UBSAN=1 00:00:38.860 ++ NET_TYPE=phy 00:00:38.860 ++ RUN_NIGHTLY=0 00:00:38.860 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:38.860 + [[ -n '' ]] 00:00:38.860 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:38.860 + for M in /var/spdk/build-*-manifest.txt 00:00:38.860 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:38.860 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:38.860 + for M in /var/spdk/build-*-manifest.txt 00:00:38.860 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:38.860 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:38.860 ++ uname 00:00:38.860 + [[ Linux == \L\i\n\u\x ]] 00:00:38.860 + sudo dmesg -T 00:00:38.860 + sudo dmesg --clear 00:00:38.860 + dmesg_pid=2386299 00:00:38.860 + [[ Fedora Linux == FreeBSD ]] 00:00:38.860 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.860 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.860 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:38.860 + [[ -x /usr/src/fio-static/fio ]] 00:00:38.860 + export FIO_BIN=/usr/src/fio-static/fio 00:00:38.860 + FIO_BIN=/usr/src/fio-static/fio 00:00:38.860 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:38.860 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:38.860 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:38.860 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.860 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.860 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:38.860 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.860 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.860 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:38.860 + sudo dmesg -Tw 00:00:38.860 Test configuration: 00:00:38.860 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.860 SPDK_TEST_NVMF=1 00:00:38.860 SPDK_TEST_NVME_CLI=1 00:00:38.860 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.860 SPDK_TEST_NVMF_NICS=e810 00:00:38.860 SPDK_TEST_VFIOUSER=1 00:00:38.860 SPDK_RUN_UBSAN=1 00:00:38.860 NET_TYPE=phy 00:00:38.860 RUN_NIGHTLY=0 21:48:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:38.860 21:48:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:38.860 21:48:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:38.860 21:48:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:38.860 21:48:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.860 21:48:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.860 21:48:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.860 21:48:18 -- paths/export.sh@5 -- $ export PATH 00:00:38.860 21:48:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.860 21:48:18 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:38.860 21:48:18 -- common/autobuild_common.sh@447 -- $ date +%s 00:00:38.861 21:48:18 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721850498.XXXXXX 00:00:38.861 21:48:18 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721850498.akVoPZ 00:00:38.861 21:48:18 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:00:38.861 
21:48:18 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:00:38.861 21:48:18 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:38.861 21:48:18 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:38.861 21:48:18 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:38.861 21:48:18 -- common/autobuild_common.sh@463 -- $ get_config_params 00:00:38.861 21:48:18 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:00:38.861 21:48:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.861 21:48:18 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:38.861 21:48:18 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:00:38.861 21:48:18 -- pm/common@17 -- $ local monitor 00:00:38.861 21:48:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.861 21:48:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.861 21:48:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.861 21:48:18 -- pm/common@21 -- $ date +%s 00:00:38.861 21:48:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.861 21:48:18 -- pm/common@21 -- $ date +%s 00:00:38.861 21:48:18 -- pm/common@25 -- $ sleep 1 00:00:38.861 21:48:18 -- pm/common@21 -- $ date +%s 00:00:38.861 21:48:18 -- pm/common@21 -- $ date +%s 00:00:38.861 21:48:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721850498 00:00:38.861 21:48:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721850498 00:00:38.861 21:48:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721850498 00:00:38.861 21:48:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721850498 00:00:39.120 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721850498_collect-vmstat.pm.log 00:00:39.120 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721850498_collect-cpu-load.pm.log 00:00:39.120 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721850498_collect-cpu-temp.pm.log 00:00:39.120 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721850498_collect-bmc-pm.bmc.pm.log 00:00:40.059 21:48:19 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 
00:00:40.059 21:48:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:40.059 21:48:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:40.059 21:48:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:40.059 21:48:19 -- spdk/autobuild.sh@16 -- $ date -u 00:00:40.059 Wed Jul 24 07:48:19 PM UTC 2024 00:00:40.059 21:48:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:40.059 v24.09-pre-312-g38b03952e 00:00:40.059 21:48:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:40.059 21:48:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:40.059 21:48:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:40.059 21:48:19 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:40.059 21:48:19 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:40.059 21:48:19 -- common/autotest_common.sh@10 -- $ set +x 00:00:40.059 ************************************ 00:00:40.059 START TEST ubsan 00:00:40.059 ************************************ 00:00:40.059 21:48:19 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:40.059 using ubsan 00:00:40.059 00:00:40.059 real 0m0.001s 00:00:40.059 user 0m0.000s 00:00:40.059 sys 0m0.000s 00:00:40.059 21:48:19 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:40.059 21:48:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:40.059 ************************************ 00:00:40.059 END TEST ubsan 00:00:40.059 ************************************ 00:00:40.060 21:48:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:40.060 21:48:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:40.060 21:48:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:40.060 21:48:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:40.060 21:48:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:40.060 21:48:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:40.060 21:48:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:40.060 21:48:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:40.060 21:48:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:40.319 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:40.319 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:40.579 Using 'verbs' RDMA provider 00:00:53.736 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:08.634 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:08.634 Creating mk/config.mk...done. 00:01:08.634 Creating mk/cc.flags.mk...done. 00:01:08.634 Type 'make' to build. 00:01:08.634 21:48:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:08.634 21:48:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:08.634 21:48:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:08.634 21:48:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.634 ************************************ 00:01:08.634 START TEST make 00:01:08.634 ************************************ 00:01:08.634 21:48:46 make -- common/autotest_common.sh@1125 -- $ make -j112 00:01:08.634 make[1]: Nothing to be done for 'all'. 
00:01:09.573 The Meson build system 00:01:09.573 Version: 1.3.1 00:01:09.573 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:09.573 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:09.573 Build type: native build 00:01:09.573 Project name: libvfio-user 00:01:09.573 Project version: 0.0.1 00:01:09.573 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:09.573 C linker for the host machine: cc ld.bfd 2.39-16 00:01:09.573 Host machine cpu family: x86_64 00:01:09.573 Host machine cpu: x86_64 00:01:09.573 Run-time dependency threads found: YES 00:01:09.573 Library dl found: YES 00:01:09.573 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:09.573 Run-time dependency json-c found: YES 0.17 00:01:09.573 Run-time dependency cmocka found: YES 1.1.7 00:01:09.573 Program pytest-3 found: NO 00:01:09.573 Program flake8 found: NO 00:01:09.573 Program misspell-fixer found: NO 00:01:09.573 Program restructuredtext-lint found: NO 00:01:09.573 Program valgrind found: YES (/usr/bin/valgrind) 00:01:09.573 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:09.573 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:09.573 Compiler for C supports arguments -Wwrite-strings: YES 00:01:09.573 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:09.573 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:09.573 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:09.573 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:09.573 Build targets in project: 8 00:01:09.573 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:09.573 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:09.573 00:01:09.573 libvfio-user 0.0.1 00:01:09.573 00:01:09.573 User defined options 00:01:09.573 buildtype : debug 00:01:09.573 default_library: shared 00:01:09.573 libdir : /usr/local/lib 00:01:09.573 00:01:09.573 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:10.140 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:10.140 [1/37] Compiling C object samples/null.p/null.c.o 00:01:10.140 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:10.140 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:10.140 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:10.140 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:10.140 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:10.140 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:10.140 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:10.140 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:10.140 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:10.140 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:10.140 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:10.140 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:10.140 [14/37] Compiling C object samples/server.p/server.c.o 00:01:10.140 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:10.140 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:10.140 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:10.140 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:10.140 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:10.140 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:10.140 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:10.140 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:10.140 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:10.140 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:10.140 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:10.140 [26/37] Compiling C object samples/client.p/client.c.o 00:01:10.140 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:10.140 [28/37] Linking target samples/client 00:01:10.399 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:10.399 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:10.399 [31/37] Linking target test/unit_tests 00:01:10.399 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:10.399 [33/37] Linking target samples/gpio-pci-idio-16 00:01:10.399 [34/37] Linking target samples/lspci 00:01:10.399 [35/37] Linking target samples/server 00:01:10.399 [36/37] Linking target samples/null 00:01:10.399 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:10.399 INFO: autodetecting backend as ninja 00:01:10.399 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:10.692 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:10.692 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:10.692 ninja: no work to do. 00:01:15.961 The Meson build system 00:01:15.961 Version: 1.3.1 00:01:15.961 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:15.961 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:15.961 Build type: native build 00:01:15.961 Program cat found: YES (/usr/bin/cat) 00:01:15.961 Project name: DPDK 00:01:15.961 Project version: 24.03.0 00:01:15.961 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:15.961 C linker for the host machine: cc ld.bfd 2.39-16 00:01:15.961 Host machine cpu family: x86_64 00:01:15.961 Host machine cpu: x86_64 00:01:15.961 Message: ## Building in Developer Mode ## 00:01:15.961 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:15.961 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:15.961 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:15.961 Program python3 found: YES (/usr/bin/python3) 00:01:15.961 Program cat found: YES (/usr/bin/cat) 00:01:15.961 Compiler for C supports arguments -march=native: YES 00:01:15.961 Checking for size of "void *" : 8 00:01:15.961 Checking for size of "void *" : 8 (cached) 00:01:15.961 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:15.961 Library m found: YES 00:01:15.961 Library numa found: YES 00:01:15.961 Has header "numaif.h" : YES 00:01:15.961 Library fdt found: NO 00:01:15.961 Library execinfo found: NO 00:01:15.961 Has header "execinfo.h" : YES 00:01:15.961 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:15.961 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:15.961 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:15.961 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:15.961 Run-time dependency openssl found: YES 3.0.9 00:01:15.961 Run-time dependency libpcap found: YES 1.10.4 00:01:15.961 Has header "pcap.h" with dependency libpcap: YES 00:01:15.961 Compiler for C supports arguments -Wcast-qual: YES 00:01:15.961 Compiler for C supports arguments -Wdeprecated: YES 00:01:15.961 Compiler for C supports arguments -Wformat: YES 00:01:15.961 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:15.961 Compiler for C supports arguments -Wformat-security: NO 00:01:15.961 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:15.961 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:15.961 Compiler for C supports arguments -Wnested-externs: YES 00:01:15.961 Compiler for C supports arguments -Wold-style-definition: YES 00:01:15.961 Compiler for C supports arguments -Wpointer-arith: YES 00:01:15.961 Compiler for C supports arguments -Wsign-compare: YES 00:01:15.961 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:15.961 Compiler for C supports arguments -Wundef: YES 00:01:15.961 Compiler for C supports arguments -Wwrite-strings: YES 00:01:15.961 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:15.961 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:15.961 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:15.961 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:15.961 Program objdump found: YES (/usr/bin/objdump) 00:01:15.961 Compiler for C supports arguments -mavx512f: YES 00:01:15.961 Checking if "AVX512 checking" compiles: YES 00:01:15.961 Fetching value of define "__SSE4_2__" : 1 00:01:15.961 Fetching value of define "__AES__" : 1 00:01:15.961 Fetching value of define "__AVX__" : 1 00:01:15.961 Fetching value of define "__AVX2__" : 1 00:01:15.961 Fetching value of define "__AVX512BW__" : 1 00:01:15.961 Fetching value of define "__AVX512CD__" : 1 00:01:15.961 Fetching value of define "__AVX512DQ__" : 1 00:01:15.961 Fetching value of define "__AVX512F__" : 1 00:01:15.961 Fetching value of define "__AVX512VL__" : 1 00:01:15.961 Fetching value of define "__PCLMUL__" : 1 00:01:15.962 Fetching value of define "__RDRND__" : 1 00:01:15.962 Fetching value of define "__RDSEED__" : 1 00:01:15.962 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:15.962 Fetching value of define "__znver1__" : (undefined) 00:01:15.962 Fetching value of define "__znver2__" : (undefined) 00:01:15.962 Fetching value of define "__znver3__" : (undefined) 00:01:15.962 Fetching value of define "__znver4__" : (undefined) 00:01:15.962 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:15.962 Message: lib/log: Defining dependency "log" 00:01:15.962 Message: lib/kvargs: Defining dependency "kvargs" 00:01:15.962 Message: lib/telemetry: Defining dependency "telemetry" 00:01:15.962 Checking for function "getentropy" : NO 00:01:15.962 Message: lib/eal: Defining dependency "eal" 00:01:15.962 Message: lib/ring: Defining dependency "ring" 00:01:15.962 Message: lib/rcu: Defining dependency "rcu" 00:01:15.962 Message: lib/mempool: Defining dependency "mempool" 00:01:15.962 Message: lib/mbuf: Defining dependency "mbuf" 00:01:15.962 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:15.962 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:15.962 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:15.962 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:15.962 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:15.962 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:15.962 Compiler for C supports arguments -mpclmul: YES 00:01:15.962 Compiler for C supports arguments -maes: YES 00:01:15.962 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:15.962 Compiler for C supports arguments -mavx512bw: YES 00:01:15.962 Compiler for C supports arguments -mavx512dq: YES 00:01:15.962 Compiler for C supports arguments -mavx512vl: YES 00:01:15.962 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:15.962 Compiler for C supports arguments -mavx2: YES 00:01:15.962 Compiler for C supports arguments -mavx: YES 00:01:15.962 Message: lib/net: Defining dependency "net" 00:01:15.962 Message: lib/meter: Defining dependency "meter" 00:01:15.962 Message: lib/ethdev: Defining dependency "ethdev" 00:01:15.962 Message: lib/pci: Defining dependency "pci" 00:01:15.962 Message: lib/cmdline: Defining dependency "cmdline" 00:01:15.962 Message: lib/hash: Defining dependency "hash" 00:01:15.962 Message: lib/timer: Defining dependency "timer" 00:01:15.962 Message: lib/compressdev: Defining dependency "compressdev" 00:01:15.962 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:15.962 Message: lib/dmadev: Defining dependency "dmadev" 00:01:15.962 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:15.962 Message: lib/power: Defining dependency "power" 00:01:15.962 Message: lib/reorder: Defining dependency "reorder" 00:01:15.962 Message: lib/security: Defining dependency "security" 00:01:15.962 Has header "linux/userfaultfd.h" : YES 00:01:15.962 Has header "linux/vduse.h" : YES 00:01:15.962 Message: lib/vhost: Defining dependency "vhost" 00:01:15.962 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:15.962 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:15.962 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:15.962 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:15.962 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:15.962 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:15.962 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:15.962 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:15.962 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:15.962 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:15.962 Program doxygen found: YES (/usr/bin/doxygen) 00:01:15.962 Configuring doxy-api-html.conf using configuration 00:01:15.962 Configuring doxy-api-man.conf using configuration 00:01:15.962 Program mandb found: YES (/usr/bin/mandb) 00:01:15.962 Program sphinx-build found: NO 00:01:15.962 Configuring rte_build_config.h using configuration 00:01:15.962 Message: 00:01:15.962 ================= 00:01:15.962 Applications Enabled 00:01:15.962 ================= 00:01:15.962 00:01:15.962 apps: 00:01:15.962 00:01:15.962 00:01:15.962 Message: 00:01:15.962 ================= 00:01:15.962 Libraries Enabled 00:01:15.962 ================= 00:01:15.962 00:01:15.962 libs: 00:01:15.962 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:15.962 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:15.962 cryptodev, dmadev, power, reorder, security, vhost, 00:01:15.962 00:01:15.962 Message: 00:01:15.962 =============== 00:01:15.962 Drivers Enabled 00:01:15.962 =============== 00:01:15.962 00:01:15.962 common: 00:01:15.962 00:01:15.962 bus: 00:01:15.962 pci, vdev, 00:01:15.962 mempool: 00:01:15.962 ring, 00:01:15.962 dma: 00:01:15.962 00:01:15.962 net: 00:01:15.962 00:01:15.962 crypto: 00:01:15.962 00:01:15.962 compress: 00:01:15.962 00:01:15.962 vdpa: 00:01:15.962 00:01:15.962 00:01:15.962 Message: 00:01:15.962 ================= 00:01:15.962 Content Skipped 00:01:15.962 ================= 00:01:15.962 00:01:15.962 apps: 00:01:15.962 dumpcap: explicitly disabled via build config 00:01:15.962 graph: explicitly disabled via build config 00:01:15.962 pdump: explicitly disabled via build config 00:01:15.962 proc-info: explicitly disabled via build config 00:01:15.962 test-acl: explicitly disabled via build config 00:01:15.962 test-bbdev: explicitly disabled via build config 00:01:15.962 test-cmdline: explicitly disabled via build config 00:01:15.962 test-compress-perf: explicitly disabled via build config 00:01:15.962 test-crypto-perf: explicitly disabled via build config 00:01:15.962 test-dma-perf: explicitly disabled via build config 00:01:15.962 test-eventdev: explicitly disabled via build config 00:01:15.962 test-fib: explicitly disabled via build config 00:01:15.962 test-flow-perf: explicitly disabled via build config 00:01:15.962 test-gpudev: explicitly disabled via build config 
00:01:15.962 test-mldev: explicitly disabled via build config 00:01:15.962 test-pipeline: explicitly disabled via build config 00:01:15.962 test-pmd: explicitly disabled via build config 00:01:15.962 test-regex: explicitly disabled via build config 00:01:15.962 test-sad: explicitly disabled via build config 00:01:15.962 test-security-perf: explicitly disabled via build config 00:01:15.962 00:01:15.962 libs: 00:01:15.962 argparse: explicitly disabled via build config 00:01:15.962 metrics: explicitly disabled via build config 00:01:15.962 acl: explicitly disabled via build config 00:01:15.962 bbdev: explicitly disabled via build config 00:01:15.962 bitratestats: explicitly disabled via build config 00:01:15.962 bpf: explicitly disabled via build config 00:01:15.962 cfgfile: explicitly disabled via build config 00:01:15.962 distributor: explicitly disabled via build config 00:01:15.962 efd: explicitly disabled via build config 00:01:15.962 eventdev: explicitly disabled via build config 00:01:15.962 dispatcher: explicitly disabled via build config 00:01:15.962 gpudev: explicitly disabled via build config 00:01:15.962 gro: explicitly disabled via build config 00:01:15.962 gso: explicitly disabled via build config 00:01:15.962 ip_frag: explicitly disabled via build config 00:01:15.962 jobstats: explicitly disabled via build config 00:01:15.962 latencystats: explicitly disabled via build config 00:01:15.962 lpm: explicitly disabled via build config 00:01:15.962 member: explicitly disabled via build config 00:01:15.962 pcapng: explicitly disabled via build config 00:01:15.962 rawdev: explicitly disabled via build config 00:01:15.962 regexdev: explicitly disabled via build config 00:01:15.962 mldev: explicitly disabled via build config 00:01:15.962 rib: explicitly disabled via build config 00:01:15.962 sched: explicitly disabled via build config 00:01:15.962 stack: explicitly disabled via build config 00:01:15.962 ipsec: explicitly disabled via build config 00:01:15.962 pdcp: explicitly disabled via build config 00:01:15.962 fib: explicitly disabled via build config 00:01:15.962 port: explicitly disabled via build config 00:01:15.962 pdump: explicitly disabled via build config 00:01:15.962 table: explicitly disabled via build config 00:01:15.962 pipeline: explicitly disabled via build config 00:01:15.962 graph: explicitly disabled via build config 00:01:15.962 node: explicitly disabled via build config 00:01:15.962 00:01:15.962 drivers: 00:01:15.962 common/cpt: not in enabled drivers build config 00:01:15.962 common/dpaax: not in enabled drivers build config 00:01:15.962 common/iavf: not in enabled drivers build config 00:01:15.962 common/idpf: not in enabled drivers build config 00:01:15.962 common/ionic: not in enabled drivers build config 00:01:15.962 common/mvep: not in enabled drivers build config 00:01:15.962 common/octeontx: not in enabled drivers build config 00:01:15.962 bus/auxiliary: not in enabled drivers build config 00:01:15.962 bus/cdx: not in enabled drivers build config 00:01:15.962 bus/dpaa: not in enabled drivers build config 00:01:15.962 bus/fslmc: not in enabled drivers build config 00:01:15.962 bus/ifpga: not in enabled drivers build config 00:01:15.962 bus/platform: not in enabled drivers build config 00:01:15.962 bus/uacce: not in enabled drivers build config 00:01:15.962 bus/vmbus: not in enabled drivers build config 00:01:15.962 common/cnxk: not in enabled drivers build config 00:01:15.962 common/mlx5: not in enabled drivers build config 00:01:15.962 common/nfp: not in 
enabled drivers build config 00:01:15.962 common/nitrox: not in enabled drivers build config 00:01:15.962 common/qat: not in enabled drivers build config 00:01:15.962 common/sfc_efx: not in enabled drivers build config 00:01:15.962 mempool/bucket: not in enabled drivers build config 00:01:15.962 mempool/cnxk: not in enabled drivers build config 00:01:15.962 mempool/dpaa: not in enabled drivers build config 00:01:15.962 mempool/dpaa2: not in enabled drivers build config 00:01:15.962 mempool/octeontx: not in enabled drivers build config 00:01:15.962 mempool/stack: not in enabled drivers build config 00:01:15.962 dma/cnxk: not in enabled drivers build config 00:01:15.962 dma/dpaa: not in enabled drivers build config 00:01:15.962 dma/dpaa2: not in enabled drivers build config 00:01:15.962 dma/hisilicon: not in enabled drivers build config 00:01:15.962 dma/idxd: not in enabled drivers build config 00:01:15.962 dma/ioat: not in enabled drivers build config 00:01:15.963 dma/skeleton: not in enabled drivers build config 00:01:15.963 net/af_packet: not in enabled drivers build config 00:01:15.963 net/af_xdp: not in enabled drivers build config 00:01:15.963 net/ark: not in enabled drivers build config 00:01:15.963 net/atlantic: not in enabled drivers build config 00:01:15.963 net/avp: not in enabled drivers build config 00:01:15.963 net/axgbe: not in enabled drivers build config 00:01:15.963 net/bnx2x: not in enabled drivers build config 00:01:15.963 net/bnxt: not in enabled drivers build config 00:01:15.963 net/bonding: not in enabled drivers build config 00:01:15.963 net/cnxk: not in enabled drivers build config 00:01:15.963 net/cpfl: not in enabled drivers build config 00:01:15.963 net/cxgbe: not in enabled drivers build config 00:01:15.963 net/dpaa: not in enabled drivers build config 00:01:15.963 net/dpaa2: not in enabled drivers build config 00:01:15.963 net/e1000: not in enabled drivers build config 00:01:15.963 net/ena: not in enabled drivers build config 00:01:15.963 net/enetc: not in enabled drivers build config 00:01:15.963 net/enetfec: not in enabled drivers build config 00:01:15.963 net/enic: not in enabled drivers build config 00:01:15.963 net/failsafe: not in enabled drivers build config 00:01:15.963 net/fm10k: not in enabled drivers build config 00:01:15.963 net/gve: not in enabled drivers build config 00:01:15.963 net/hinic: not in enabled drivers build config 00:01:15.963 net/hns3: not in enabled drivers build config 00:01:15.963 net/i40e: not in enabled drivers build config 00:01:15.963 net/iavf: not in enabled drivers build config 00:01:15.963 net/ice: not in enabled drivers build config 00:01:15.963 net/idpf: not in enabled drivers build config 00:01:15.963 net/igc: not in enabled drivers build config 00:01:15.963 net/ionic: not in enabled drivers build config 00:01:15.963 net/ipn3ke: not in enabled drivers build config 00:01:15.963 net/ixgbe: not in enabled drivers build config 00:01:15.963 net/mana: not in enabled drivers build config 00:01:15.963 net/memif: not in enabled drivers build config 00:01:15.963 net/mlx4: not in enabled drivers build config 00:01:15.963 net/mlx5: not in enabled drivers build config 00:01:15.963 net/mvneta: not in enabled drivers build config 00:01:15.963 net/mvpp2: not in enabled drivers build config 00:01:15.963 net/netvsc: not in enabled drivers build config 00:01:15.963 net/nfb: not in enabled drivers build config 00:01:15.963 net/nfp: not in enabled drivers build config 00:01:15.963 net/ngbe: not in enabled drivers build config 00:01:15.963 
net/null: not in enabled drivers build config 00:01:15.963 net/octeontx: not in enabled drivers build config 00:01:15.963 net/octeon_ep: not in enabled drivers build config 00:01:15.963 net/pcap: not in enabled drivers build config 00:01:15.963 net/pfe: not in enabled drivers build config 00:01:15.963 net/qede: not in enabled drivers build config 00:01:15.963 net/ring: not in enabled drivers build config 00:01:15.963 net/sfc: not in enabled drivers build config 00:01:15.963 net/softnic: not in enabled drivers build config 00:01:15.963 net/tap: not in enabled drivers build config 00:01:15.963 net/thunderx: not in enabled drivers build config 00:01:15.963 net/txgbe: not in enabled drivers build config 00:01:15.963 net/vdev_netvsc: not in enabled drivers build config 00:01:15.963 net/vhost: not in enabled drivers build config 00:01:15.963 net/virtio: not in enabled drivers build config 00:01:15.963 net/vmxnet3: not in enabled drivers build config 00:01:15.963 raw/*: missing internal dependency, "rawdev" 00:01:15.963 crypto/armv8: not in enabled drivers build config 00:01:15.963 crypto/bcmfs: not in enabled drivers build config 00:01:15.963 crypto/caam_jr: not in enabled drivers build config 00:01:15.963 crypto/ccp: not in enabled drivers build config 00:01:15.963 crypto/cnxk: not in enabled drivers build config 00:01:15.963 crypto/dpaa_sec: not in enabled drivers build config 00:01:15.963 crypto/dpaa2_sec: not in enabled drivers build config 00:01:15.963 crypto/ipsec_mb: not in enabled drivers build config 00:01:15.963 crypto/mlx5: not in enabled drivers build config 00:01:15.963 crypto/mvsam: not in enabled drivers build config 00:01:15.963 crypto/nitrox: not in enabled drivers build config 00:01:15.963 crypto/null: not in enabled drivers build config 00:01:15.963 crypto/octeontx: not in enabled drivers build config 00:01:15.963 crypto/openssl: not in enabled drivers build config 00:01:15.963 crypto/scheduler: not in enabled drivers build config 00:01:15.963 crypto/uadk: not in enabled drivers build config 00:01:15.963 crypto/virtio: not in enabled drivers build config 00:01:15.963 compress/isal: not in enabled drivers build config 00:01:15.963 compress/mlx5: not in enabled drivers build config 00:01:15.963 compress/nitrox: not in enabled drivers build config 00:01:15.963 compress/octeontx: not in enabled drivers build config 00:01:15.963 compress/zlib: not in enabled drivers build config 00:01:15.963 regex/*: missing internal dependency, "regexdev" 00:01:15.963 ml/*: missing internal dependency, "mldev" 00:01:15.963 vdpa/ifc: not in enabled drivers build config 00:01:15.963 vdpa/mlx5: not in enabled drivers build config 00:01:15.963 vdpa/nfp: not in enabled drivers build config 00:01:15.963 vdpa/sfc: not in enabled drivers build config 00:01:15.963 event/*: missing internal dependency, "eventdev" 00:01:15.963 baseband/*: missing internal dependency, "bbdev" 00:01:15.963 gpu/*: missing internal dependency, "gpudev" 00:01:15.963 00:01:15.963 00:01:16.530 Build targets in project: 85 00:01:16.530 00:01:16.530 DPDK 24.03.0 00:01:16.530 00:01:16.530 User defined options 00:01:16.530 buildtype : debug 00:01:16.530 default_library : shared 00:01:16.530 libdir : lib 00:01:16.530 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.530 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:16.530 c_link_args : 00:01:16.530 cpu_instruction_set: native 00:01:16.530 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:16.530 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:16.530 enable_docs : false 00:01:16.530 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:16.530 enable_kmods : false 00:01:16.530 max_lcores : 128 00:01:16.530 tests : false 00:01:16.530 00:01:16.530 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:16.804 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:16.804 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:16.804 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:16.804 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:16.804 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:16.804 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:16.804 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:16.804 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:17.068 [8/268] Linking static target lib/librte_kvargs.a 00:01:17.068 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:17.068 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:17.068 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:17.068 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:17.068 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:17.068 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:17.068 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:17.068 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:17.068 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:17.068 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:17.068 [19/268] Linking static target lib/librte_log.a 00:01:17.068 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:17.068 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:17.068 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:17.068 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:17.068 [24/268] Linking static target lib/librte_pci.a 00:01:17.068 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:17.068 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:17.068 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:17.068 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:17.068 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:17.068 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:17.068 [31/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:17.332 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:17.332 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:17.332 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:17.332 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:17.332 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:17.332 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:17.332 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:17.332 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:17.332 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:17.332 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:17.332 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:17.332 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:17.332 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:17.332 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:17.332 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:17.332 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:17.332 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:17.332 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:17.332 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:17.332 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:17.332 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:17.332 [53/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:17.332 [54/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:17.591 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:17.591 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:17.591 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:17.591 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:17.591 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:17.591 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:17.591 [61/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.591 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:17.591 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:17.591 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:17.591 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:17.591 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:17.591 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:17.591 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:17.591 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:17.591 [70/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:17.591 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:17.591 [72/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.591 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:17.591 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:17.591 [75/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:17.591 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:17.591 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:17.591 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:17.591 [79/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:17.591 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:17.591 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:17.591 [82/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:17.591 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:17.591 [84/268] Linking static target lib/librte_meter.a 00:01:17.591 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:17.591 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:17.591 [87/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:17.591 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:17.591 [89/268] Linking static target lib/librte_telemetry.a 00:01:17.591 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:17.591 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:17.591 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:17.591 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:17.591 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:17.591 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:17.591 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:17.591 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:17.591 [98/268] Linking static target lib/librte_ring.a 00:01:17.591 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:17.591 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:17.591 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:17.591 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:17.591 [103/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:17.591 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:17.591 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:17.591 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:17.591 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:17.591 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:17.591 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:17.591 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:17.591 [111/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:17.591 [112/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:17.591 [113/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:17.591 [114/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:17.591 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:17.591 [116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:17.591 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:17.591 [118/268] Linking static target lib/librte_net.a 00:01:17.591 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:17.591 [120/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:17.591 [121/268] Linking static target lib/librte_cmdline.a 00:01:17.591 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:17.591 [123/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:17.591 [124/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:17.591 [125/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:17.591 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:17.591 [127/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:17.591 [128/268] Linking static target lib/librte_mempool.a 00:01:17.591 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:17.591 [130/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:17.591 [131/268] Linking static target lib/librte_timer.a 00:01:17.591 [132/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:17.591 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:17.591 [134/268] Linking static target lib/librte_rcu.a 00:01:17.591 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:17.591 [136/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:17.591 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:17.591 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:17.591 [139/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:17.591 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:17.591 [141/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:17.591 [142/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:17.591 [143/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:17.591 [144/268] Linking static target lib/librte_eal.a 00:01:17.591 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:17.591 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:17.591 [147/268] Linking static target lib/librte_dmadev.a 00:01:17.850 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:17.850 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:17.850 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:17.850 [151/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:17.850 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 
00:01:17.850 [153/268] Linking static target lib/librte_mbuf.a 00:01:17.850 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:17.850 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:17.850 [156/268] Linking static target lib/librte_compressdev.a 00:01:17.850 [157/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.850 [158/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.850 [159/268] Linking target lib/librte_log.so.24.1 00:01:17.851 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:17.851 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:17.851 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:17.851 [163/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.851 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:17.851 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:17.851 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:17.851 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:17.851 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:17.851 [169/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:17.851 [170/268] Linking static target lib/librte_power.a 00:01:17.851 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:17.851 [172/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:17.851 [173/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:17.851 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:17.851 [175/268] Linking static target lib/librte_security.a 00:01:17.851 [176/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:17.851 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:17.851 [178/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.851 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:17.851 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:17.851 [181/268] Linking static target lib/librte_hash.a 00:01:17.851 [182/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:18.111 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:18.111 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:18.111 [185/268] Linking target lib/librte_kvargs.so.24.1 00:01:18.111 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:18.111 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:18.111 [188/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:18.111 [189/268] Linking static target lib/librte_reorder.a 00:01:18.111 [190/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.111 [191/268] Linking static target lib/librte_cryptodev.a 00:01:18.111 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:18.111 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 
00:01:18.111 [194/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.111 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:18.111 [196/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.111 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:18.111 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:18.111 [199/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:18.111 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:18.111 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:18.111 [202/268] Linking target lib/librte_telemetry.so.24.1 00:01:18.111 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:18.111 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:18.111 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:18.370 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:18.370 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:18.370 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:18.370 [209/268] Linking static target drivers/librte_bus_pci.a 00:01:18.370 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:18.370 [211/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:18.370 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:18.370 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:18.370 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.370 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:18.370 [216/268] Linking static target lib/librte_ethdev.a 00:01:18.629 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.629 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.629 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.630 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.630 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.630 [222/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.630 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:18.889 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.889 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.889 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.147 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.716 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:19.716 [229/268] Linking static target lib/librte_vhost.a 
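The meson summary and the ninja progress records above come from the DPDK submodule that SPDK builds under dpdk/build-tmp before compiling its own libraries. As a minimal sketch of an equivalent standalone configure-and-build using the option names shown in that summary (the real invocation is generated by SPDK's dpdkbuild wrapper and is not captured in this log, and the disable_libs/enable_drivers lists are abbreviated here):

    # Illustrative only: a DPDK build resembling the configuration summarized above.
    # Run from the dpdk source directory; the option values are shortened examples.
    meson setup build-tmp \
        -Ddisable_libs=port,sched,rib,node \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false \
        -Dtests=false -Dmax_lcores=128
    ninja -C build-tmp -j "$(nproc)"

The -j 112 seen further down simply matches this builder's core count; any parallelism value works.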
00:01:20.285 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.665 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.240 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.148 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.148 [234/268] Linking target lib/librte_eal.so.24.1 00:01:30.407 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:30.407 [236/268] Linking target lib/librte_ring.so.24.1 00:01:30.407 [237/268] Linking target lib/librte_meter.so.24.1 00:01:30.407 [238/268] Linking target lib/librte_timer.so.24.1 00:01:30.407 [239/268] Linking target lib/librte_pci.so.24.1 00:01:30.407 [240/268] Linking target lib/librte_dmadev.so.24.1 00:01:30.407 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:30.666 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:30.666 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:30.666 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:30.666 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:30.666 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:30.666 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:30.666 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:30.666 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:30.666 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:30.666 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:30.666 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:30.666 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:30.961 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:30.961 [255/268] Linking target lib/librte_net.so.24.1 00:01:30.961 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:30.961 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:01:30.961 [258/268] Linking target lib/librte_reorder.so.24.1 00:01:31.282 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:31.282 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:31.282 [261/268] Linking target lib/librte_hash.so.24.1 00:01:31.282 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:31.282 [263/268] Linking target lib/librte_security.so.24.1 00:01:31.282 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:31.282 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:31.282 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:31.282 [267/268] Linking target lib/librte_power.so.24.1 00:01:31.282 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:31.282 INFO: autodetecting backend as ninja 00:01:31.282 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:32.660 CC lib/log/log.o 00:01:32.660 CC lib/log/log_flags.o 00:01:32.660 CC lib/ut/ut.o 00:01:32.660 CC lib/log/log_deprecated.o 00:01:32.660 CC 
lib/ut_mock/mock.o 00:01:32.660 LIB libspdk_ut.a 00:01:32.660 LIB libspdk_log.a 00:01:32.660 LIB libspdk_ut_mock.a 00:01:32.660 SO libspdk_ut.so.2.0 00:01:32.660 SO libspdk_log.so.7.0 00:01:32.660 SO libspdk_ut_mock.so.6.0 00:01:32.661 SYMLINK libspdk_ut.so 00:01:32.661 SYMLINK libspdk_log.so 00:01:32.661 SYMLINK libspdk_ut_mock.so 00:01:32.920 CC lib/ioat/ioat.o 00:01:32.920 CC lib/util/bit_array.o 00:01:32.920 CC lib/util/base64.o 00:01:32.920 CC lib/util/cpuset.o 00:01:32.920 CC lib/util/crc16.o 00:01:32.920 CC lib/util/crc32.o 00:01:32.920 CC lib/util/crc32c.o 00:01:33.178 CC lib/util/crc32_ieee.o 00:01:33.178 CC lib/util/crc64.o 00:01:33.178 CC lib/util/dif.o 00:01:33.178 CC lib/util/fd.o 00:01:33.178 CC lib/util/file.o 00:01:33.178 CC lib/util/hexlify.o 00:01:33.178 CC lib/util/fd_group.o 00:01:33.178 CC lib/util/iov.o 00:01:33.178 CC lib/util/math.o 00:01:33.178 CC lib/util/net.o 00:01:33.178 CC lib/util/pipe.o 00:01:33.178 CC lib/util/strerror_tls.o 00:01:33.178 CC lib/dma/dma.o 00:01:33.178 CC lib/util/uuid.o 00:01:33.178 CC lib/util/string.o 00:01:33.178 CC lib/util/zipf.o 00:01:33.178 CC lib/util/xor.o 00:01:33.178 CXX lib/trace_parser/trace.o 00:01:33.178 CC lib/vfio_user/host/vfio_user_pci.o 00:01:33.178 CC lib/vfio_user/host/vfio_user.o 00:01:33.178 LIB libspdk_dma.a 00:01:33.178 LIB libspdk_ioat.a 00:01:33.438 SO libspdk_dma.so.4.0 00:01:33.438 SO libspdk_ioat.so.7.0 00:01:33.438 SYMLINK libspdk_dma.so 00:01:33.438 SYMLINK libspdk_ioat.so 00:01:33.438 LIB libspdk_vfio_user.a 00:01:33.438 LIB libspdk_util.a 00:01:33.438 SO libspdk_vfio_user.so.5.0 00:01:33.438 SO libspdk_util.so.10.0 00:01:33.438 SYMLINK libspdk_vfio_user.so 00:01:33.697 SYMLINK libspdk_util.so 00:01:33.697 LIB libspdk_trace_parser.a 00:01:33.697 SO libspdk_trace_parser.so.5.0 00:01:33.956 SYMLINK libspdk_trace_parser.so 00:01:33.956 CC lib/rdma_provider/common.o 00:01:33.956 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:33.956 CC lib/conf/conf.o 00:01:33.956 CC lib/env_dpdk/env.o 00:01:33.956 CC lib/env_dpdk/memory.o 00:01:33.956 CC lib/env_dpdk/init.o 00:01:33.956 CC lib/env_dpdk/pci.o 00:01:33.956 CC lib/env_dpdk/threads.o 00:01:33.956 CC lib/env_dpdk/pci_ioat.o 00:01:33.956 CC lib/env_dpdk/pci_virtio.o 00:01:33.956 CC lib/env_dpdk/pci_vmd.o 00:01:33.956 CC lib/idxd/idxd_user.o 00:01:33.956 CC lib/env_dpdk/pci_idxd.o 00:01:33.956 CC lib/idxd/idxd.o 00:01:33.956 CC lib/env_dpdk/pci_event.o 00:01:33.956 CC lib/env_dpdk/sigbus_handler.o 00:01:33.956 CC lib/json/json_parse.o 00:01:33.956 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:33.956 CC lib/idxd/idxd_kernel.o 00:01:33.956 CC lib/json/json_util.o 00:01:33.956 CC lib/env_dpdk/pci_dpdk.o 00:01:33.956 CC lib/json/json_write.o 00:01:33.956 CC lib/vmd/vmd.o 00:01:33.956 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:33.956 CC lib/vmd/led.o 00:01:33.956 CC lib/rdma_utils/rdma_utils.o 00:01:34.215 LIB libspdk_rdma_provider.a 00:01:34.215 SO libspdk_rdma_provider.so.6.0 00:01:34.215 LIB libspdk_conf.a 00:01:34.215 SO libspdk_conf.so.6.0 00:01:34.215 LIB libspdk_rdma_utils.a 00:01:34.215 SYMLINK libspdk_rdma_provider.so 00:01:34.215 SYMLINK libspdk_conf.so 00:01:34.215 LIB libspdk_json.a 00:01:34.215 SO libspdk_rdma_utils.so.1.0 00:01:34.474 SO libspdk_json.so.6.0 00:01:34.474 SYMLINK libspdk_rdma_utils.so 00:01:34.474 SYMLINK libspdk_json.so 00:01:34.474 LIB libspdk_idxd.a 00:01:34.474 SO libspdk_idxd.so.12.0 00:01:34.474 LIB libspdk_vmd.a 00:01:34.474 SO libspdk_vmd.so.6.0 00:01:34.734 SYMLINK libspdk_idxd.so 00:01:34.734 SYMLINK libspdk_vmd.so 00:01:34.734 CC 
lib/jsonrpc/jsonrpc_server.o 00:01:34.734 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:34.734 CC lib/jsonrpc/jsonrpc_client.o 00:01:34.734 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:34.993 LIB libspdk_jsonrpc.a 00:01:34.993 LIB libspdk_env_dpdk.a 00:01:34.993 SO libspdk_jsonrpc.so.6.0 00:01:34.993 SO libspdk_env_dpdk.so.15.0 00:01:34.993 SYMLINK libspdk_jsonrpc.so 00:01:35.252 SYMLINK libspdk_env_dpdk.so 00:01:35.511 CC lib/rpc/rpc.o 00:01:35.511 LIB libspdk_rpc.a 00:01:35.771 SO libspdk_rpc.so.6.0 00:01:35.771 SYMLINK libspdk_rpc.so 00:01:36.031 CC lib/trace/trace.o 00:01:36.031 CC lib/trace/trace_flags.o 00:01:36.031 CC lib/trace/trace_rpc.o 00:01:36.031 CC lib/notify/notify.o 00:01:36.031 CC lib/notify/notify_rpc.o 00:01:36.031 CC lib/keyring/keyring.o 00:01:36.031 CC lib/keyring/keyring_rpc.o 00:01:36.290 LIB libspdk_notify.a 00:01:36.290 LIB libspdk_trace.a 00:01:36.290 SO libspdk_notify.so.6.0 00:01:36.290 LIB libspdk_keyring.a 00:01:36.290 SO libspdk_trace.so.10.0 00:01:36.290 SYMLINK libspdk_notify.so 00:01:36.290 SO libspdk_keyring.so.1.0 00:01:36.290 SYMLINK libspdk_trace.so 00:01:36.290 SYMLINK libspdk_keyring.so 00:01:36.859 CC lib/sock/sock.o 00:01:36.859 CC lib/sock/sock_rpc.o 00:01:36.859 CC lib/thread/thread.o 00:01:36.859 CC lib/thread/iobuf.o 00:01:37.119 LIB libspdk_sock.a 00:01:37.119 SO libspdk_sock.so.10.0 00:01:37.119 SYMLINK libspdk_sock.so 00:01:37.379 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:37.379 CC lib/nvme/nvme_ctrlr.o 00:01:37.379 CC lib/nvme/nvme_fabric.o 00:01:37.379 CC lib/nvme/nvme_ns_cmd.o 00:01:37.379 CC lib/nvme/nvme_ns.o 00:01:37.379 CC lib/nvme/nvme_pcie_common.o 00:01:37.379 CC lib/nvme/nvme_pcie.o 00:01:37.379 CC lib/nvme/nvme_qpair.o 00:01:37.379 CC lib/nvme/nvme.o 00:01:37.379 CC lib/nvme/nvme_quirks.o 00:01:37.379 CC lib/nvme/nvme_transport.o 00:01:37.379 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:37.379 CC lib/nvme/nvme_discovery.o 00:01:37.379 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:37.379 CC lib/nvme/nvme_opal.o 00:01:37.379 CC lib/nvme/nvme_io_msg.o 00:01:37.379 CC lib/nvme/nvme_tcp.o 00:01:37.379 CC lib/nvme/nvme_poll_group.o 00:01:37.379 CC lib/nvme/nvme_zns.o 00:01:37.379 CC lib/nvme/nvme_stubs.o 00:01:37.379 CC lib/nvme/nvme_auth.o 00:01:37.379 CC lib/nvme/nvme_cuse.o 00:01:37.379 CC lib/nvme/nvme_vfio_user.o 00:01:37.379 CC lib/nvme/nvme_rdma.o 00:01:37.638 LIB libspdk_thread.a 00:01:37.897 SO libspdk_thread.so.10.1 00:01:37.897 SYMLINK libspdk_thread.so 00:01:38.156 CC lib/blob/blobstore.o 00:01:38.156 CC lib/blob/request.o 00:01:38.156 CC lib/blob/zeroes.o 00:01:38.156 CC lib/blob/blob_bs_dev.o 00:01:38.156 CC lib/virtio/virtio.o 00:01:38.156 CC lib/virtio/virtio_vhost_user.o 00:01:38.156 CC lib/virtio/virtio_vfio_user.o 00:01:38.156 CC lib/virtio/virtio_pci.o 00:01:38.156 CC lib/accel/accel_rpc.o 00:01:38.156 CC lib/accel/accel.o 00:01:38.156 CC lib/init/json_config.o 00:01:38.156 CC lib/accel/accel_sw.o 00:01:38.156 CC lib/init/subsystem.o 00:01:38.156 CC lib/init/subsystem_rpc.o 00:01:38.156 CC lib/init/rpc.o 00:01:38.156 CC lib/vfu_tgt/tgt_endpoint.o 00:01:38.156 CC lib/vfu_tgt/tgt_rpc.o 00:01:38.416 LIB libspdk_init.a 00:01:38.416 SO libspdk_init.so.5.0 00:01:38.416 LIB libspdk_virtio.a 00:01:38.416 LIB libspdk_vfu_tgt.a 00:01:38.416 SYMLINK libspdk_init.so 00:01:38.416 SO libspdk_virtio.so.7.0 00:01:38.675 SO libspdk_vfu_tgt.so.3.0 00:01:38.675 SYMLINK libspdk_virtio.so 00:01:38.675 SYMLINK libspdk_vfu_tgt.so 00:01:38.932 CC lib/event/app.o 00:01:38.932 CC lib/event/reactor.o 00:01:38.932 CC lib/event/log_rpc.o 00:01:38.932 CC 
lib/event/app_rpc.o 00:01:38.932 CC lib/event/scheduler_static.o 00:01:38.932 LIB libspdk_accel.a 00:01:38.932 SO libspdk_accel.so.16.0 00:01:38.932 LIB libspdk_nvme.a 00:01:38.932 SYMLINK libspdk_accel.so 00:01:39.191 SO libspdk_nvme.so.13.1 00:01:39.191 LIB libspdk_event.a 00:01:39.191 SO libspdk_event.so.14.0 00:01:39.449 SYMLINK libspdk_event.so 00:01:39.449 SYMLINK libspdk_nvme.so 00:01:39.449 CC lib/bdev/bdev.o 00:01:39.449 CC lib/bdev/part.o 00:01:39.449 CC lib/bdev/bdev_rpc.o 00:01:39.449 CC lib/bdev/bdev_zone.o 00:01:39.449 CC lib/bdev/scsi_nvme.o 00:01:40.388 LIB libspdk_blob.a 00:01:40.388 SO libspdk_blob.so.11.0 00:01:40.388 SYMLINK libspdk_blob.so 00:01:40.646 CC lib/blobfs/blobfs.o 00:01:40.646 CC lib/blobfs/tree.o 00:01:40.646 CC lib/lvol/lvol.o 00:01:41.212 LIB libspdk_bdev.a 00:01:41.212 SO libspdk_bdev.so.16.0 00:01:41.212 LIB libspdk_blobfs.a 00:01:41.212 SYMLINK libspdk_bdev.so 00:01:41.212 SO libspdk_blobfs.so.10.0 00:01:41.471 LIB libspdk_lvol.a 00:01:41.471 SYMLINK libspdk_blobfs.so 00:01:41.471 SO libspdk_lvol.so.10.0 00:01:41.471 SYMLINK libspdk_lvol.so 00:01:41.730 CC lib/nbd/nbd.o 00:01:41.730 CC lib/nbd/nbd_rpc.o 00:01:41.730 CC lib/ublk/ublk_rpc.o 00:01:41.730 CC lib/ublk/ublk.o 00:01:41.730 CC lib/scsi/lun.o 00:01:41.730 CC lib/nvmf/ctrlr_bdev.o 00:01:41.730 CC lib/nvmf/ctrlr.o 00:01:41.730 CC lib/scsi/dev.o 00:01:41.730 CC lib/nvmf/subsystem.o 00:01:41.730 CC lib/nvmf/ctrlr_discovery.o 00:01:41.730 CC lib/nvmf/nvmf.o 00:01:41.730 CC lib/ftl/ftl_core.o 00:01:41.730 CC lib/scsi/port.o 00:01:41.730 CC lib/nvmf/nvmf_rpc.o 00:01:41.730 CC lib/ftl/ftl_init.o 00:01:41.730 CC lib/scsi/scsi.o 00:01:41.730 CC lib/ftl/ftl_layout.o 00:01:41.730 CC lib/scsi/scsi_bdev.o 00:01:41.730 CC lib/ftl/ftl_io.o 00:01:41.730 CC lib/nvmf/tcp.o 00:01:41.730 CC lib/ftl/ftl_debug.o 00:01:41.730 CC lib/scsi/scsi_pr.o 00:01:41.730 CC lib/nvmf/transport.o 00:01:41.730 CC lib/nvmf/stubs.o 00:01:41.730 CC lib/scsi/scsi_rpc.o 00:01:41.730 CC lib/ftl/ftl_sb.o 00:01:41.730 CC lib/scsi/task.o 00:01:41.730 CC lib/nvmf/vfio_user.o 00:01:41.730 CC lib/ftl/ftl_l2p.o 00:01:41.730 CC lib/nvmf/mdns_server.o 00:01:41.730 CC lib/ftl/ftl_l2p_flat.o 00:01:41.730 CC lib/ftl/ftl_nv_cache.o 00:01:41.730 CC lib/nvmf/rdma.o 00:01:41.731 CC lib/ftl/ftl_band.o 00:01:41.731 CC lib/ftl/ftl_band_ops.o 00:01:41.731 CC lib/nvmf/auth.o 00:01:41.731 CC lib/ftl/ftl_writer.o 00:01:41.731 CC lib/ftl/ftl_rq.o 00:01:41.731 CC lib/ftl/ftl_reloc.o 00:01:41.731 CC lib/ftl/ftl_p2l.o 00:01:41.731 CC lib/ftl/ftl_l2p_cache.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:41.731 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:41.731 CC lib/ftl/utils/ftl_conf.o 00:01:41.731 CC lib/ftl/utils/ftl_md.o 00:01:41.731 CC lib/ftl/utils/ftl_mempool.o 00:01:41.731 CC lib/ftl/utils/ftl_bitmap.o 00:01:41.731 CC lib/ftl/utils/ftl_property.o 00:01:41.731 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:41.731 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:41.731 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:41.731 CC lib/ftl/upgrade/ftl_sb_upgrade.o 
00:01:41.731 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:41.731 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:41.731 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:41.731 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:41.731 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:41.731 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:41.731 CC lib/ftl/base/ftl_base_dev.o 00:01:41.731 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:41.731 CC lib/ftl/base/ftl_base_bdev.o 00:01:41.731 CC lib/ftl/ftl_trace.o 00:01:42.298 LIB libspdk_nbd.a 00:01:42.298 SO libspdk_nbd.so.7.0 00:01:42.298 SYMLINK libspdk_nbd.so 00:01:42.298 LIB libspdk_ublk.a 00:01:42.298 LIB libspdk_scsi.a 00:01:42.298 SO libspdk_ublk.so.3.0 00:01:42.298 SO libspdk_scsi.so.9.0 00:01:42.298 SYMLINK libspdk_ublk.so 00:01:42.556 SYMLINK libspdk_scsi.so 00:01:42.556 LIB libspdk_ftl.a 00:01:42.814 SO libspdk_ftl.so.9.0 00:01:42.814 CC lib/iscsi/conn.o 00:01:42.814 CC lib/vhost/vhost.o 00:01:42.814 CC lib/vhost/vhost_rpc.o 00:01:42.814 CC lib/iscsi/init_grp.o 00:01:42.814 CC lib/vhost/vhost_scsi.o 00:01:42.814 CC lib/vhost/rte_vhost_user.o 00:01:42.814 CC lib/iscsi/iscsi.o 00:01:42.814 CC lib/iscsi/md5.o 00:01:42.814 CC lib/vhost/vhost_blk.o 00:01:42.814 CC lib/iscsi/param.o 00:01:42.814 CC lib/iscsi/portal_grp.o 00:01:42.814 CC lib/iscsi/tgt_node.o 00:01:42.814 CC lib/iscsi/iscsi_subsystem.o 00:01:42.814 CC lib/iscsi/task.o 00:01:42.814 CC lib/iscsi/iscsi_rpc.o 00:01:43.072 SYMLINK libspdk_ftl.so 00:01:43.331 LIB libspdk_nvmf.a 00:01:43.589 SO libspdk_nvmf.so.19.0 00:01:43.589 LIB libspdk_vhost.a 00:01:43.589 SO libspdk_vhost.so.8.0 00:01:43.589 SYMLINK libspdk_nvmf.so 00:01:43.848 SYMLINK libspdk_vhost.so 00:01:43.848 LIB libspdk_iscsi.a 00:01:43.848 SO libspdk_iscsi.so.8.0 00:01:44.107 SYMLINK libspdk_iscsi.so 00:01:44.676 CC module/env_dpdk/env_dpdk_rpc.o 00:01:44.676 CC module/vfu_device/vfu_virtio.o 00:01:44.676 CC module/vfu_device/vfu_virtio_blk.o 00:01:44.676 CC module/vfu_device/vfu_virtio_scsi.o 00:01:44.676 CC module/vfu_device/vfu_virtio_rpc.o 00:01:44.676 LIB libspdk_env_dpdk_rpc.a 00:01:44.676 CC module/blob/bdev/blob_bdev.o 00:01:44.676 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:44.676 CC module/keyring/file/keyring.o 00:01:44.676 CC module/keyring/file/keyring_rpc.o 00:01:44.676 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:44.676 CC module/keyring/linux/keyring.o 00:01:44.676 CC module/accel/ioat/accel_ioat.o 00:01:44.676 CC module/sock/posix/posix.o 00:01:44.676 CC module/keyring/linux/keyring_rpc.o 00:01:44.676 CC module/accel/ioat/accel_ioat_rpc.o 00:01:44.676 CC module/accel/iaa/accel_iaa.o 00:01:44.676 CC module/scheduler/gscheduler/gscheduler.o 00:01:44.676 CC module/accel/iaa/accel_iaa_rpc.o 00:01:44.676 SO libspdk_env_dpdk_rpc.so.6.0 00:01:44.676 CC module/accel/dsa/accel_dsa.o 00:01:44.676 CC module/accel/error/accel_error.o 00:01:44.676 CC module/accel/dsa/accel_dsa_rpc.o 00:01:44.676 CC module/accel/error/accel_error_rpc.o 00:01:44.935 SYMLINK libspdk_env_dpdk_rpc.so 00:01:44.935 LIB libspdk_keyring_linux.a 00:01:44.935 LIB libspdk_keyring_file.a 00:01:44.935 LIB libspdk_scheduler_dpdk_governor.a 00:01:44.936 LIB libspdk_scheduler_gscheduler.a 00:01:44.936 SO libspdk_keyring_file.so.1.0 00:01:44.936 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:44.936 LIB libspdk_accel_ioat.a 00:01:44.936 LIB libspdk_scheduler_dynamic.a 00:01:44.936 SO libspdk_keyring_linux.so.1.0 00:01:44.936 LIB libspdk_accel_iaa.a 00:01:44.936 SO libspdk_scheduler_gscheduler.so.4.0 00:01:44.936 LIB libspdk_accel_error.a 00:01:44.936 LIB libspdk_blob_bdev.a 
00:01:44.936 SO libspdk_scheduler_dynamic.so.4.0 00:01:44.936 SO libspdk_accel_ioat.so.6.0 00:01:44.936 SYMLINK libspdk_keyring_file.so 00:01:44.936 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:44.936 LIB libspdk_accel_dsa.a 00:01:44.936 SO libspdk_accel_iaa.so.3.0 00:01:44.936 SO libspdk_accel_error.so.2.0 00:01:44.936 SO libspdk_blob_bdev.so.11.0 00:01:44.936 SYMLINK libspdk_keyring_linux.so 00:01:44.936 SYMLINK libspdk_scheduler_gscheduler.so 00:01:44.936 SYMLINK libspdk_scheduler_dynamic.so 00:01:44.936 SO libspdk_accel_dsa.so.5.0 00:01:45.194 SYMLINK libspdk_accel_ioat.so 00:01:45.194 SYMLINK libspdk_accel_iaa.so 00:01:45.194 SYMLINK libspdk_accel_error.so 00:01:45.194 SYMLINK libspdk_blob_bdev.so 00:01:45.194 LIB libspdk_vfu_device.a 00:01:45.194 SYMLINK libspdk_accel_dsa.so 00:01:45.194 SO libspdk_vfu_device.so.3.0 00:01:45.194 SYMLINK libspdk_vfu_device.so 00:01:45.453 LIB libspdk_sock_posix.a 00:01:45.453 SO libspdk_sock_posix.so.6.0 00:01:45.453 SYMLINK libspdk_sock_posix.so 00:01:45.712 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:45.712 CC module/bdev/malloc/bdev_malloc.o 00:01:45.712 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:45.712 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:45.712 CC module/bdev/gpt/gpt.o 00:01:45.712 CC module/bdev/gpt/vbdev_gpt.o 00:01:45.712 CC module/bdev/lvol/vbdev_lvol.o 00:01:45.712 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:45.712 CC module/bdev/iscsi/bdev_iscsi.o 00:01:45.712 CC module/bdev/raid/bdev_raid.o 00:01:45.712 CC module/bdev/aio/bdev_aio.o 00:01:45.712 CC module/bdev/aio/bdev_aio_rpc.o 00:01:45.712 CC module/bdev/raid/bdev_raid_sb.o 00:01:45.712 CC module/bdev/raid/bdev_raid_rpc.o 00:01:45.712 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:45.712 CC module/bdev/raid/raid0.o 00:01:45.712 CC module/bdev/raid/raid1.o 00:01:45.712 CC module/bdev/raid/concat.o 00:01:45.712 CC module/bdev/ftl/bdev_ftl.o 00:01:45.712 CC module/bdev/split/vbdev_split.o 00:01:45.712 CC module/bdev/split/vbdev_split_rpc.o 00:01:45.712 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:45.712 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:45.712 CC module/bdev/nvme/bdev_nvme.o 00:01:45.712 CC module/bdev/delay/vbdev_delay.o 00:01:45.712 CC module/bdev/passthru/vbdev_passthru.o 00:01:45.712 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:45.712 CC module/bdev/nvme/nvme_rpc.o 00:01:45.712 CC module/bdev/error/vbdev_error.o 00:01:45.712 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:45.712 CC module/bdev/error/vbdev_error_rpc.o 00:01:45.712 CC module/bdev/nvme/bdev_mdns_client.o 00:01:45.712 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:45.712 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:45.712 CC module/bdev/nvme/vbdev_opal.o 00:01:45.712 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:45.712 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:45.712 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:45.712 CC module/blobfs/bdev/blobfs_bdev.o 00:01:45.712 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:45.712 CC module/bdev/null/bdev_null.o 00:01:45.712 CC module/bdev/null/bdev_null_rpc.o 00:01:45.970 LIB libspdk_blobfs_bdev.a 00:01:45.970 SO libspdk_blobfs_bdev.so.6.0 00:01:45.970 LIB libspdk_bdev_split.a 00:01:45.970 LIB libspdk_bdev_gpt.a 00:01:45.970 LIB libspdk_bdev_error.a 00:01:45.970 SO libspdk_bdev_split.so.6.0 00:01:45.970 LIB libspdk_bdev_null.a 00:01:45.970 SYMLINK libspdk_blobfs_bdev.so 00:01:45.970 LIB libspdk_bdev_ftl.a 00:01:45.970 LIB libspdk_bdev_zone_block.a 00:01:45.970 SO libspdk_bdev_error.so.6.0 00:01:45.970 SO libspdk_bdev_gpt.so.6.0 00:01:45.970 LIB 
libspdk_bdev_aio.a 00:01:45.970 LIB libspdk_bdev_passthru.a 00:01:45.970 LIB libspdk_bdev_malloc.a 00:01:45.970 LIB libspdk_bdev_iscsi.a 00:01:45.970 SO libspdk_bdev_null.so.6.0 00:01:45.970 SO libspdk_bdev_zone_block.so.6.0 00:01:45.970 SO libspdk_bdev_ftl.so.6.0 00:01:45.970 SO libspdk_bdev_passthru.so.6.0 00:01:45.970 SYMLINK libspdk_bdev_split.so 00:01:45.970 LIB libspdk_bdev_delay.a 00:01:45.970 SO libspdk_bdev_malloc.so.6.0 00:01:45.970 SO libspdk_bdev_aio.so.6.0 00:01:45.970 SYMLINK libspdk_bdev_error.so 00:01:45.970 SO libspdk_bdev_iscsi.so.6.0 00:01:45.970 SYMLINK libspdk_bdev_gpt.so 00:01:45.970 SYMLINK libspdk_bdev_zone_block.so 00:01:45.970 SYMLINK libspdk_bdev_null.so 00:01:45.970 SO libspdk_bdev_delay.so.6.0 00:01:45.970 SYMLINK libspdk_bdev_passthru.so 00:01:45.970 SYMLINK libspdk_bdev_ftl.so 00:01:45.970 SYMLINK libspdk_bdev_malloc.so 00:01:45.970 LIB libspdk_bdev_lvol.a 00:01:45.970 SYMLINK libspdk_bdev_aio.so 00:01:45.970 SYMLINK libspdk_bdev_iscsi.so 00:01:45.970 LIB libspdk_bdev_virtio.a 00:01:46.229 SO libspdk_bdev_lvol.so.6.0 00:01:46.229 SYMLINK libspdk_bdev_delay.so 00:01:46.229 SO libspdk_bdev_virtio.so.6.0 00:01:46.229 SYMLINK libspdk_bdev_lvol.so 00:01:46.229 SYMLINK libspdk_bdev_virtio.so 00:01:46.488 LIB libspdk_bdev_raid.a 00:01:46.488 SO libspdk_bdev_raid.so.6.0 00:01:46.488 SYMLINK libspdk_bdev_raid.so 00:01:47.426 LIB libspdk_bdev_nvme.a 00:01:47.426 SO libspdk_bdev_nvme.so.7.0 00:01:47.426 SYMLINK libspdk_bdev_nvme.so 00:01:47.996 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:47.996 CC module/event/subsystems/sock/sock.o 00:01:47.996 CC module/event/subsystems/scheduler/scheduler.o 00:01:47.996 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:47.996 CC module/event/subsystems/iobuf/iobuf.o 00:01:47.996 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:47.996 CC module/event/subsystems/keyring/keyring.o 00:01:47.996 CC module/event/subsystems/vmd/vmd.o 00:01:48.255 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:48.255 LIB libspdk_event_scheduler.a 00:01:48.255 LIB libspdk_event_vhost_blk.a 00:01:48.255 LIB libspdk_event_vfu_tgt.a 00:01:48.255 LIB libspdk_event_sock.a 00:01:48.255 LIB libspdk_event_vmd.a 00:01:48.255 LIB libspdk_event_keyring.a 00:01:48.255 SO libspdk_event_scheduler.so.4.0 00:01:48.255 LIB libspdk_event_iobuf.a 00:01:48.255 SO libspdk_event_vhost_blk.so.3.0 00:01:48.255 SO libspdk_event_sock.so.5.0 00:01:48.255 SO libspdk_event_vmd.so.6.0 00:01:48.255 SO libspdk_event_vfu_tgt.so.3.0 00:01:48.255 SO libspdk_event_keyring.so.1.0 00:01:48.255 SO libspdk_event_iobuf.so.3.0 00:01:48.255 SYMLINK libspdk_event_scheduler.so 00:01:48.255 SYMLINK libspdk_event_vhost_blk.so 00:01:48.255 SYMLINK libspdk_event_sock.so 00:01:48.255 SYMLINK libspdk_event_vmd.so 00:01:48.255 SYMLINK libspdk_event_vfu_tgt.so 00:01:48.255 SYMLINK libspdk_event_keyring.so 00:01:48.514 SYMLINK libspdk_event_iobuf.so 00:01:48.774 CC module/event/subsystems/accel/accel.o 00:01:48.774 LIB libspdk_event_accel.a 00:01:49.033 SO libspdk_event_accel.so.6.0 00:01:49.033 SYMLINK libspdk_event_accel.so 00:01:49.292 CC module/event/subsystems/bdev/bdev.o 00:01:49.552 LIB libspdk_event_bdev.a 00:01:49.552 SO libspdk_event_bdev.so.6.0 00:01:49.552 SYMLINK libspdk_event_bdev.so 00:01:50.121 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:50.121 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:50.121 CC module/event/subsystems/scsi/scsi.o 00:01:50.121 CC module/event/subsystems/ublk/ublk.o 00:01:50.121 CC module/event/subsystems/nbd/nbd.o 00:01:50.121 LIB libspdk_event_scsi.a 
00:01:50.121 LIB libspdk_event_ublk.a 00:01:50.121 LIB libspdk_event_nbd.a 00:01:50.121 LIB libspdk_event_nvmf.a 00:01:50.121 SO libspdk_event_scsi.so.6.0 00:01:50.121 SO libspdk_event_ublk.so.3.0 00:01:50.121 SO libspdk_event_nbd.so.6.0 00:01:50.121 SO libspdk_event_nvmf.so.6.0 00:01:50.121 SYMLINK libspdk_event_scsi.so 00:01:50.121 SYMLINK libspdk_event_nbd.so 00:01:50.121 SYMLINK libspdk_event_ublk.so 00:01:50.380 SYMLINK libspdk_event_nvmf.so 00:01:50.639 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:50.639 CC module/event/subsystems/iscsi/iscsi.o 00:01:50.639 LIB libspdk_event_vhost_scsi.a 00:01:50.639 SO libspdk_event_vhost_scsi.so.3.0 00:01:50.639 LIB libspdk_event_iscsi.a 00:01:50.899 SYMLINK libspdk_event_vhost_scsi.so 00:01:50.899 SO libspdk_event_iscsi.so.6.0 00:01:50.899 SYMLINK libspdk_event_iscsi.so 00:01:51.159 SO libspdk.so.6.0 00:01:51.159 SYMLINK libspdk.so 00:01:51.418 CC test/rpc_client/rpc_client_test.o 00:01:51.418 CC app/spdk_top/spdk_top.o 00:01:51.418 CC app/trace_record/trace_record.o 00:01:51.418 TEST_HEADER include/spdk/accel_module.h 00:01:51.418 TEST_HEADER include/spdk/accel.h 00:01:51.418 TEST_HEADER include/spdk/assert.h 00:01:51.418 TEST_HEADER include/spdk/base64.h 00:01:51.418 TEST_HEADER include/spdk/barrier.h 00:01:51.418 TEST_HEADER include/spdk/bdev_module.h 00:01:51.418 TEST_HEADER include/spdk/bdev.h 00:01:51.418 TEST_HEADER include/spdk/bit_array.h 00:01:51.418 TEST_HEADER include/spdk/bdev_zone.h 00:01:51.418 TEST_HEADER include/spdk/bit_pool.h 00:01:51.418 CC app/spdk_nvme_discover/discovery_aer.o 00:01:51.418 TEST_HEADER include/spdk/blob_bdev.h 00:01:51.418 CC app/spdk_lspci/spdk_lspci.o 00:01:51.418 CC app/spdk_nvme_identify/identify.o 00:01:51.418 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:51.418 TEST_HEADER include/spdk/blobfs.h 00:01:51.418 TEST_HEADER include/spdk/conf.h 00:01:51.418 CXX app/trace/trace.o 00:01:51.418 TEST_HEADER include/spdk/cpuset.h 00:01:51.418 TEST_HEADER include/spdk/config.h 00:01:51.418 TEST_HEADER include/spdk/blob.h 00:01:51.418 TEST_HEADER include/spdk/crc16.h 00:01:51.418 TEST_HEADER include/spdk/dif.h 00:01:51.418 TEST_HEADER include/spdk/crc32.h 00:01:51.418 TEST_HEADER include/spdk/crc64.h 00:01:51.418 TEST_HEADER include/spdk/dma.h 00:01:51.418 TEST_HEADER include/spdk/endian.h 00:01:51.418 TEST_HEADER include/spdk/env_dpdk.h 00:01:51.418 TEST_HEADER include/spdk/env.h 00:01:51.418 TEST_HEADER include/spdk/event.h 00:01:51.418 TEST_HEADER include/spdk/fd_group.h 00:01:51.418 TEST_HEADER include/spdk/fd.h 00:01:51.418 TEST_HEADER include/spdk/file.h 00:01:51.418 TEST_HEADER include/spdk/ftl.h 00:01:51.418 TEST_HEADER include/spdk/gpt_spec.h 00:01:51.418 TEST_HEADER include/spdk/hexlify.h 00:01:51.418 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:51.418 TEST_HEADER include/spdk/idxd.h 00:01:51.418 TEST_HEADER include/spdk/histogram_data.h 00:01:51.418 TEST_HEADER include/spdk/idxd_spec.h 00:01:51.418 TEST_HEADER include/spdk/init.h 00:01:51.418 TEST_HEADER include/spdk/ioat.h 00:01:51.418 CC app/spdk_nvme_perf/perf.o 00:01:51.418 TEST_HEADER include/spdk/ioat_spec.h 00:01:51.418 TEST_HEADER include/spdk/iscsi_spec.h 00:01:51.418 TEST_HEADER include/spdk/json.h 00:01:51.418 TEST_HEADER include/spdk/keyring.h 00:01:51.418 TEST_HEADER include/spdk/jsonrpc.h 00:01:51.418 TEST_HEADER include/spdk/keyring_module.h 00:01:51.418 TEST_HEADER include/spdk/likely.h 00:01:51.418 TEST_HEADER include/spdk/lvol.h 00:01:51.418 TEST_HEADER include/spdk/mmio.h 00:01:51.418 TEST_HEADER include/spdk/log.h 
00:01:51.418 TEST_HEADER include/spdk/memory.h 00:01:51.418 CC app/spdk_tgt/spdk_tgt.o 00:01:51.418 TEST_HEADER include/spdk/nbd.h 00:01:51.418 TEST_HEADER include/spdk/net.h 00:01:51.418 CC app/iscsi_tgt/iscsi_tgt.o 00:01:51.418 TEST_HEADER include/spdk/nvme.h 00:01:51.418 TEST_HEADER include/spdk/notify.h 00:01:51.418 TEST_HEADER include/spdk/nvme_intel.h 00:01:51.418 CC app/nvmf_tgt/nvmf_main.o 00:01:51.418 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:51.418 TEST_HEADER include/spdk/nvme_spec.h 00:01:51.418 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:51.418 TEST_HEADER include/spdk/nvme_zns.h 00:01:51.418 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:51.418 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:51.418 TEST_HEADER include/spdk/nvmf.h 00:01:51.418 CC app/spdk_dd/spdk_dd.o 00:01:51.418 TEST_HEADER include/spdk/nvmf_spec.h 00:01:51.418 TEST_HEADER include/spdk/nvmf_transport.h 00:01:51.418 TEST_HEADER include/spdk/opal.h 00:01:51.418 TEST_HEADER include/spdk/pci_ids.h 00:01:51.418 TEST_HEADER include/spdk/opal_spec.h 00:01:51.418 TEST_HEADER include/spdk/pipe.h 00:01:51.418 TEST_HEADER include/spdk/reduce.h 00:01:51.418 TEST_HEADER include/spdk/rpc.h 00:01:51.418 TEST_HEADER include/spdk/queue.h 00:01:51.418 TEST_HEADER include/spdk/scheduler.h 00:01:51.418 TEST_HEADER include/spdk/scsi.h 00:01:51.418 TEST_HEADER include/spdk/scsi_spec.h 00:01:51.418 TEST_HEADER include/spdk/stdinc.h 00:01:51.418 TEST_HEADER include/spdk/sock.h 00:01:51.418 TEST_HEADER include/spdk/trace.h 00:01:51.418 TEST_HEADER include/spdk/string.h 00:01:51.418 TEST_HEADER include/spdk/thread.h 00:01:51.418 TEST_HEADER include/spdk/trace_parser.h 00:01:51.418 TEST_HEADER include/spdk/ublk.h 00:01:51.418 TEST_HEADER include/spdk/tree.h 00:01:51.418 TEST_HEADER include/spdk/version.h 00:01:51.418 TEST_HEADER include/spdk/util.h 00:01:51.418 TEST_HEADER include/spdk/uuid.h 00:01:51.418 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:51.418 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:51.418 TEST_HEADER include/spdk/vhost.h 00:01:51.418 TEST_HEADER include/spdk/vmd.h 00:01:51.418 TEST_HEADER include/spdk/xor.h 00:01:51.418 CXX test/cpp_headers/accel.o 00:01:51.418 TEST_HEADER include/spdk/zipf.h 00:01:51.418 CXX test/cpp_headers/accel_module.o 00:01:51.418 CXX test/cpp_headers/assert.o 00:01:51.418 CXX test/cpp_headers/barrier.o 00:01:51.418 CXX test/cpp_headers/base64.o 00:01:51.418 CXX test/cpp_headers/bdev.o 00:01:51.707 CXX test/cpp_headers/bdev_module.o 00:01:51.707 CXX test/cpp_headers/bdev_zone.o 00:01:51.707 CXX test/cpp_headers/bit_array.o 00:01:51.707 CXX test/cpp_headers/blob_bdev.o 00:01:51.707 CXX test/cpp_headers/bit_pool.o 00:01:51.707 CXX test/cpp_headers/blobfs_bdev.o 00:01:51.707 CXX test/cpp_headers/blobfs.o 00:01:51.707 CXX test/cpp_headers/config.o 00:01:51.707 CXX test/cpp_headers/blob.o 00:01:51.707 CXX test/cpp_headers/conf.o 00:01:51.707 CXX test/cpp_headers/cpuset.o 00:01:51.707 CXX test/cpp_headers/crc16.o 00:01:51.707 CXX test/cpp_headers/crc32.o 00:01:51.707 CXX test/cpp_headers/crc64.o 00:01:51.707 CXX test/cpp_headers/dif.o 00:01:51.707 CXX test/cpp_headers/dma.o 00:01:51.707 CXX test/cpp_headers/endian.o 00:01:51.707 CXX test/cpp_headers/env_dpdk.o 00:01:51.707 CXX test/cpp_headers/env.o 00:01:51.707 CXX test/cpp_headers/fd_group.o 00:01:51.707 CXX test/cpp_headers/fd.o 00:01:51.707 CXX test/cpp_headers/event.o 00:01:51.707 CXX test/cpp_headers/file.o 00:01:51.707 CXX test/cpp_headers/ftl.o 00:01:51.707 CXX test/cpp_headers/hexlify.o 00:01:51.707 CXX test/cpp_headers/idxd.o 
00:01:51.707 CXX test/cpp_headers/gpt_spec.o 00:01:51.707 CXX test/cpp_headers/idxd_spec.o 00:01:51.707 CXX test/cpp_headers/histogram_data.o 00:01:51.707 CXX test/cpp_headers/init.o 00:01:51.707 CXX test/cpp_headers/ioat_spec.o 00:01:51.707 CXX test/cpp_headers/ioat.o 00:01:51.707 CXX test/cpp_headers/iscsi_spec.o 00:01:51.707 CXX test/cpp_headers/jsonrpc.o 00:01:51.707 CXX test/cpp_headers/json.o 00:01:51.707 CXX test/cpp_headers/keyring_module.o 00:01:51.707 CXX test/cpp_headers/keyring.o 00:01:51.707 CXX test/cpp_headers/likely.o 00:01:51.707 CXX test/cpp_headers/log.o 00:01:51.707 CXX test/cpp_headers/lvol.o 00:01:51.707 CXX test/cpp_headers/memory.o 00:01:51.707 CXX test/cpp_headers/mmio.o 00:01:51.707 CXX test/cpp_headers/nbd.o 00:01:51.707 CXX test/cpp_headers/net.o 00:01:51.707 CXX test/cpp_headers/nvme.o 00:01:51.707 CXX test/cpp_headers/notify.o 00:01:51.707 CXX test/cpp_headers/nvme_ocssd.o 00:01:51.707 CXX test/cpp_headers/nvme_intel.o 00:01:51.707 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:51.707 CXX test/cpp_headers/nvme_zns.o 00:01:51.707 CXX test/cpp_headers/nvme_spec.o 00:01:51.707 CXX test/cpp_headers/nvmf_cmd.o 00:01:51.707 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:51.707 CXX test/cpp_headers/nvmf.o 00:01:51.707 CXX test/cpp_headers/nvmf_spec.o 00:01:51.707 CXX test/cpp_headers/nvmf_transport.o 00:01:51.707 CC examples/util/zipf/zipf.o 00:01:51.707 CXX test/cpp_headers/opal.o 00:01:51.707 CXX test/cpp_headers/opal_spec.o 00:01:51.707 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:51.707 CXX test/cpp_headers/pci_ids.o 00:01:51.707 CXX test/cpp_headers/pipe.o 00:01:51.707 CXX test/cpp_headers/queue.o 00:01:51.707 CXX test/cpp_headers/reduce.o 00:01:51.707 CXX test/cpp_headers/rpc.o 00:01:51.707 CXX test/cpp_headers/scheduler.o 00:01:51.707 CXX test/cpp_headers/scsi.o 00:01:51.707 CXX test/cpp_headers/scsi_spec.o 00:01:51.707 CXX test/cpp_headers/sock.o 00:01:51.707 CXX test/cpp_headers/stdinc.o 00:01:51.707 CXX test/cpp_headers/string.o 00:01:51.707 CC test/thread/poller_perf/poller_perf.o 00:01:51.707 CXX test/cpp_headers/thread.o 00:01:51.707 CXX test/cpp_headers/trace.o 00:01:51.707 CXX test/cpp_headers/trace_parser.o 00:01:51.707 CXX test/cpp_headers/tree.o 00:01:51.707 CXX test/cpp_headers/ublk.o 00:01:51.707 CXX test/cpp_headers/util.o 00:01:51.707 CC test/env/memory/memory_ut.o 00:01:51.707 CC examples/ioat/perf/perf.o 00:01:51.707 CC examples/ioat/verify/verify.o 00:01:51.707 CC test/app/histogram_perf/histogram_perf.o 00:01:51.707 CC test/env/pci/pci_ut.o 00:01:51.707 CC test/env/vtophys/vtophys.o 00:01:51.707 CC test/app/jsoncat/jsoncat.o 00:01:51.707 CXX test/cpp_headers/uuid.o 00:01:51.707 CC test/dma/test_dma/test_dma.o 00:01:51.707 CC test/app/stub/stub.o 00:01:51.707 CC test/app/bdev_svc/bdev_svc.o 00:01:51.707 CXX test/cpp_headers/version.o 00:01:51.707 CC app/fio/nvme/fio_plugin.o 00:01:51.707 CXX test/cpp_headers/vfio_user_pci.o 00:01:52.042 CC app/fio/bdev/fio_plugin.o 00:01:52.042 LINK spdk_lspci 00:01:52.042 CXX test/cpp_headers/vfio_user_spec.o 00:01:52.042 LINK rpc_client_test 00:01:52.042 LINK spdk_nvme_discover 00:01:52.042 CC test/env/mem_callbacks/mem_callbacks.o 00:01:52.314 LINK nvmf_tgt 00:01:52.314 LINK interrupt_tgt 00:01:52.314 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:52.314 LINK spdk_tgt 00:01:52.314 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:52.314 LINK spdk_trace_record 00:01:52.314 LINK iscsi_tgt 00:01:52.314 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:52.314 LINK zipf 00:01:52.314 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:52.314 LINK env_dpdk_post_init 00:01:52.314 LINK histogram_perf 00:01:52.314 CXX test/cpp_headers/vhost.o 00:01:52.314 LINK poller_perf 00:01:52.314 LINK jsoncat 00:01:52.314 CXX test/cpp_headers/vmd.o 00:01:52.314 CXX test/cpp_headers/xor.o 00:01:52.314 CXX test/cpp_headers/zipf.o 00:01:52.314 LINK vtophys 00:01:52.572 LINK bdev_svc 00:01:52.573 LINK stub 00:01:52.573 LINK verify 00:01:52.573 LINK ioat_perf 00:01:52.573 LINK spdk_dd 00:01:52.573 LINK spdk_trace 00:01:52.573 LINK test_dma 00:01:52.831 LINK pci_ut 00:01:52.831 LINK spdk_bdev 00:01:52.831 LINK spdk_nvme 00:01:52.831 LINK vhost_fuzz 00:01:52.831 LINK nvme_fuzz 00:01:52.831 LINK spdk_top 00:01:52.831 LINK spdk_nvme_identify 00:01:52.831 LINK spdk_nvme_perf 00:01:52.831 LINK mem_callbacks 00:01:52.831 CC examples/idxd/perf/perf.o 00:01:52.831 CC examples/sock/hello_world/hello_sock.o 00:01:52.831 CC examples/vmd/led/led.o 00:01:52.831 CC examples/vmd/lsvmd/lsvmd.o 00:01:52.831 CC examples/thread/thread/thread_ex.o 00:01:53.090 CC test/event/reactor/reactor.o 00:01:53.090 CC test/event/reactor_perf/reactor_perf.o 00:01:53.090 CC test/event/app_repeat/app_repeat.o 00:01:53.090 CC test/event/event_perf/event_perf.o 00:01:53.090 CC app/vhost/vhost.o 00:01:53.090 CC test/event/scheduler/scheduler.o 00:01:53.090 LINK led 00:01:53.090 LINK lsvmd 00:01:53.090 CC test/nvme/reset/reset.o 00:01:53.090 LINK reactor_perf 00:01:53.090 LINK reactor 00:01:53.090 CC test/nvme/overhead/overhead.o 00:01:53.090 CC test/nvme/e2edp/nvme_dp.o 00:01:53.090 CC test/nvme/startup/startup.o 00:01:53.090 LINK hello_sock 00:01:53.090 CC test/nvme/reserve/reserve.o 00:01:53.090 CC test/nvme/fdp/fdp.o 00:01:53.090 CC test/nvme/connect_stress/connect_stress.o 00:01:53.090 CC test/nvme/aer/aer.o 00:01:53.090 CC test/nvme/fused_ordering/fused_ordering.o 00:01:53.090 CC test/nvme/boot_partition/boot_partition.o 00:01:53.090 CC test/nvme/simple_copy/simple_copy.o 00:01:53.090 CC test/nvme/cuse/cuse.o 00:01:53.090 LINK event_perf 00:01:53.090 CC test/nvme/err_injection/err_injection.o 00:01:53.090 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:53.090 CC test/nvme/compliance/nvme_compliance.o 00:01:53.090 CC test/nvme/sgl/sgl.o 00:01:53.090 CC test/accel/dif/dif.o 00:01:53.090 LINK app_repeat 00:01:53.090 LINK thread 00:01:53.090 CC test/blobfs/mkfs/mkfs.o 00:01:53.349 LINK vhost 00:01:53.349 LINK idxd_perf 00:01:53.349 LINK memory_ut 00:01:53.349 LINK scheduler 00:01:53.349 CC test/lvol/esnap/esnap.o 00:01:53.349 LINK boot_partition 00:01:53.349 LINK startup 00:01:53.349 LINK connect_stress 00:01:53.349 LINK err_injection 00:01:53.349 LINK fused_ordering 00:01:53.349 LINK doorbell_aers 00:01:53.349 LINK simple_copy 00:01:53.349 LINK reserve 00:01:53.349 LINK reset 00:01:53.349 LINK sgl 00:01:53.349 LINK mkfs 00:01:53.349 LINK overhead 00:01:53.349 LINK aer 00:01:53.349 LINK nvme_dp 00:01:53.606 LINK fdp 00:01:53.606 LINK nvme_compliance 00:01:53.606 LINK dif 00:01:53.606 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:53.606 CC examples/nvme/arbitration/arbitration.o 00:01:53.606 CC examples/nvme/hello_world/hello_world.o 00:01:53.606 CC examples/nvme/hotplug/hotplug.o 00:01:53.606 CC examples/nvme/reconnect/reconnect.o 00:01:53.606 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:53.606 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:53.606 CC examples/nvme/abort/abort.o 00:01:53.863 LINK iscsi_fuzz 00:01:53.863 CC examples/blob/cli/blobcli.o 00:01:53.863 CC examples/blob/hello_world/hello_blob.o 00:01:53.863 CC 
examples/accel/perf/accel_perf.o 00:01:53.863 LINK cmb_copy 00:01:53.863 LINK pmr_persistence 00:01:53.863 LINK hello_world 00:01:53.863 LINK hotplug 00:01:53.863 LINK arbitration 00:01:53.863 LINK reconnect 00:01:53.863 LINK abort 00:01:53.863 LINK hello_blob 00:01:54.120 LINK nvme_manage 00:01:54.120 LINK accel_perf 00:01:54.120 CC test/bdev/bdevio/bdevio.o 00:01:54.120 LINK blobcli 00:01:54.120 LINK cuse 00:01:54.376 LINK bdevio 00:01:54.634 CC examples/bdev/hello_world/hello_bdev.o 00:01:54.634 CC examples/bdev/bdevperf/bdevperf.o 00:01:54.892 LINK hello_bdev 00:01:55.151 LINK bdevperf 00:01:55.716 CC examples/nvmf/nvmf/nvmf.o 00:01:55.975 LINK nvmf 00:01:56.542 LINK esnap 00:01:57.108 00:01:57.108 real 0m49.100s 00:01:57.108 user 6m26.643s 00:01:57.108 sys 4m13.822s 00:01:57.108 21:49:36 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:57.108 21:49:36 make -- common/autotest_common.sh@10 -- $ set +x 00:01:57.108 ************************************ 00:01:57.108 END TEST make 00:01:57.108 ************************************ 00:01:57.108 21:49:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:57.108 21:49:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:57.108 21:49:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:57.108 21:49:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.108 21:49:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:57.108 21:49:36 -- pm/common@44 -- $ pid=2386334 00:01:57.108 21:49:36 -- pm/common@50 -- $ kill -TERM 2386334 00:01:57.108 21:49:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.108 21:49:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:57.108 21:49:36 -- pm/common@44 -- $ pid=2386336 00:01:57.108 21:49:36 -- pm/common@50 -- $ kill -TERM 2386336 00:01:57.108 21:49:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.108 21:49:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:57.108 21:49:36 -- pm/common@44 -- $ pid=2386338 00:01:57.108 21:49:36 -- pm/common@50 -- $ kill -TERM 2386338 00:01:57.108 21:49:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.108 21:49:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:57.108 21:49:36 -- pm/common@44 -- $ pid=2386361 00:01:57.108 21:49:36 -- pm/common@50 -- $ sudo -E kill -TERM 2386361 00:01:57.108 21:49:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:57.108 21:49:36 -- nvmf/common.sh@7 -- # uname -s 00:01:57.108 21:49:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:57.108 21:49:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:57.108 21:49:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:57.108 21:49:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:57.108 21:49:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:57.108 21:49:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:57.108 21:49:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:57.108 21:49:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:57.108 21:49:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:57.108 21:49:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:57.108 
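The nvmf/common.sh block being sourced here establishes the defaults the TCP tests rely on: listener ports 4420-4422, the 192.168.100.x test subnet, a host NQN and host ID produced by nvme gen-hostnqn, and the NVME_CONNECT/NVME_HOST helpers (together with NVME_SUBNQN a few lines below) used to build connect commands. A minimal sketch of how those variables are typically consumed when attaching to a test subsystem (the target address is a placeholder, not a value from this run):

    # Hypothetical usage of the variables defined in nvmf/common.sh above;
    # 10.0.0.2 is a placeholder address, not taken from this log.
    $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n "$NVME_SUBNQN"
    nvme list-subsys                      # confirm the controller attached
    nvme disconnect -n "$NVME_SUBNQN"     # detach when the check is done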
21:49:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:01:57.108 21:49:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:01:57.109 21:49:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:57.109 21:49:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:57.109 21:49:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:57.109 21:49:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:57.109 21:49:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:57.109 21:49:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:57.109 21:49:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.109 21:49:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.109 21:49:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.109 21:49:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.109 21:49:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.109 21:49:36 -- paths/export.sh@5 -- # export PATH 00:01:57.109 21:49:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.109 21:49:36 -- nvmf/common.sh@47 -- # : 0 00:01:57.109 21:49:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:57.109 21:49:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:57.109 21:49:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:57.109 21:49:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:57.109 21:49:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:57.109 21:49:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:57.109 21:49:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:57.109 21:49:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:57.109 21:49:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:57.109 21:49:36 -- spdk/autotest.sh@32 -- # uname -s 00:01:57.109 21:49:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:57.109 21:49:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:57.109 21:49:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.109 21:49:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:57.109 21:49:36 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.109 21:49:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:57.109 21:49:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:57.109 21:49:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:57.109 21:49:36 -- spdk/autotest.sh@48 -- # udevadm_pid=2447379 00:01:57.109 21:49:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:57.109 21:49:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:57.109 21:49:36 -- pm/common@17 -- # local monitor 00:01:57.109 21:49:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.109 21:49:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.109 21:49:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.109 21:49:36 -- pm/common@21 -- # date +%s 00:01:57.109 21:49:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.109 21:49:36 -- pm/common@21 -- # date +%s 00:01:57.109 21:49:36 -- pm/common@25 -- # sleep 1 00:01:57.109 21:49:36 -- pm/common@21 -- # date +%s 00:01:57.109 21:49:36 -- pm/common@21 -- # date +%s 00:01:57.109 21:49:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721850576 00:01:57.109 21:49:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721850576 00:01:57.109 21:49:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721850576 00:01:57.109 21:49:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721850576 00:01:57.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721850576_collect-vmstat.pm.log 00:01:57.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721850576_collect-cpu-load.pm.log 00:01:57.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721850576_collect-cpu-temp.pm.log 00:01:57.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721850576_collect-bmc-pm.bmc.pm.log 00:01:58.301 21:49:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:58.301 21:49:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:58.301 21:49:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:01:58.301 21:49:37 -- common/autotest_common.sh@10 -- # set +x 00:01:58.301 21:49:37 -- spdk/autotest.sh@59 -- # create_test_list 00:01:58.301 21:49:37 -- common/autotest_common.sh@748 -- # xtrace_disable 00:01:58.301 21:49:37 -- common/autotest_common.sh@10 -- # set +x 00:01:58.301 21:49:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:58.301 21:49:37 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.301 21:49:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
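[editor's note] The trace above shows autotest.sh saving the previous kernel core_pattern, creating an output/coredumps directory, and pointing core_pattern at scripts/core-collector.sh before the per-resource monitors are launched. A minimal sketch of that pipe-style core_pattern setup, with $rootdir and $output_dir as placeholder variables for the long Jenkins paths in the log; the restore step at the end is an assumption and is not part of this excerpt:

    # sketch: route kernel core dumps through a collector script (must run as root)
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)        # remember the previous handler
    mkdir -p "$output_dir/coredumps"                             # the log uses spdk/../output/coredumps
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # the leading '|' pipes each crash into the script; %P = pid, %s = signal, %t = time
    # ... run the tests ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern     # assumed cleanup at the end of the run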
00:01:58.301 21:49:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:58.301 21:49:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.301 21:49:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:58.301 21:49:37 -- common/autotest_common.sh@1455 -- # uname 00:01:58.301 21:49:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:58.301 21:49:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:58.301 21:49:37 -- common/autotest_common.sh@1475 -- # uname 00:01:58.301 21:49:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:58.301 21:49:37 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:58.301 21:49:37 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:58.301 21:49:37 -- spdk/autotest.sh@72 -- # hash lcov 00:01:58.301 21:49:37 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:58.301 21:49:37 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:58.301 --rc lcov_branch_coverage=1 00:01:58.301 --rc lcov_function_coverage=1 00:01:58.301 --rc genhtml_branch_coverage=1 00:01:58.301 --rc genhtml_function_coverage=1 00:01:58.301 --rc genhtml_legend=1 00:01:58.301 --rc geninfo_all_blocks=1 00:01:58.301 ' 00:01:58.301 21:49:37 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:58.301 --rc lcov_branch_coverage=1 00:01:58.301 --rc lcov_function_coverage=1 00:01:58.301 --rc genhtml_branch_coverage=1 00:01:58.301 --rc genhtml_function_coverage=1 00:01:58.301 --rc genhtml_legend=1 00:01:58.301 --rc geninfo_all_blocks=1 00:01:58.301 ' 00:01:58.301 21:49:37 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:58.301 --rc lcov_branch_coverage=1 00:01:58.301 --rc lcov_function_coverage=1 00:01:58.301 --rc genhtml_branch_coverage=1 00:01:58.301 --rc genhtml_function_coverage=1 00:01:58.301 --rc genhtml_legend=1 00:01:58.301 --rc geninfo_all_blocks=1 00:01:58.301 --no-external' 00:01:58.301 21:49:37 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:58.301 --rc lcov_branch_coverage=1 00:01:58.301 --rc lcov_function_coverage=1 00:01:58.301 --rc genhtml_branch_coverage=1 00:01:58.301 --rc genhtml_function_coverage=1 00:01:58.301 --rc genhtml_legend=1 00:01:58.301 --rc geninfo_all_blocks=1 00:01:58.301 --no-external' 00:01:58.301 21:49:37 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:58.301 lcov: LCOV version 1.14 00:01:58.301 21:49:37 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:10.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:10.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:20.491 
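[editor's note] The lcov command in the trace above takes an initial (-i) capture over every .gcno file in the tree before any test has run, which is why the "no functions found" / "GCOV did not produce any data" warnings around this point (and continuing below) are expected for headers and stubs with no executed code. A rough sketch of the standard lcov workflow such a baseline feeds into; only the first line corresponds to what the log shows, the later merge and report steps are the usual lcov pattern and are assumed here, not taken from this run:

    lcov -q -c -i -d "$src" -t Baseline -o cov_base.info        # zero-coverage baseline (as in the log)
    # ... run the test suites so .gcda counters get written ...
    lcov -q -c -d "$src" -t Tests -o cov_test.info               # post-run capture (assumed later step)
    lcov -a cov_base.info -a cov_test.info -o cov_total.info     # merge so never-run files still show as 0%
    genhtml cov_total.info -o coverage_html                      # optional HTML report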
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:20.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:20.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 
00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:20.492 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:20.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:20.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:20.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:20.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:23.776 21:50:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:23.776 21:50:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:23.776 21:50:02 -- common/autotest_common.sh@10 -- # set +x 00:02:23.776 21:50:02 -- spdk/autotest.sh@91 -- # rm -f 00:02:23.776 21:50:02 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:27.085 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:27.085 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:27.085 21:50:06 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:27.085 21:50:06 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:27.085 21:50:06 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:27.085 21:50:06 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:27.085 21:50:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:27.085 21:50:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:27.085 21:50:06 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
00:02:27.085 21:50:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:27.085 21:50:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:27.085 21:50:06 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:27.085 21:50:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:27.085 21:50:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:27.085 21:50:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:27.085 21:50:06 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:27.085 21:50:06 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:27.085 No valid GPT data, bailing 00:02:27.085 21:50:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:27.085 21:50:06 -- scripts/common.sh@391 -- # pt= 00:02:27.085 21:50:06 -- scripts/common.sh@392 -- # return 1 00:02:27.085 21:50:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:27.085 1+0 records in 00:02:27.085 1+0 records out 00:02:27.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548151 s, 191 MB/s 00:02:27.085 21:50:06 -- spdk/autotest.sh@118 -- # sync 00:02:27.085 21:50:06 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:27.085 21:50:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:27.085 21:50:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:35.191 21:50:13 -- spdk/autotest.sh@124 -- # uname -s 00:02:35.191 21:50:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:35.191 21:50:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:35.192 21:50:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:35.192 21:50:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:35.192 21:50:13 -- common/autotest_common.sh@10 -- # set +x 00:02:35.192 ************************************ 00:02:35.192 START TEST setup.sh 00:02:35.192 ************************************ 00:02:35.192 21:50:13 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:35.192 * Looking for test storage... 00:02:35.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:35.192 21:50:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:35.192 21:50:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:35.192 21:50:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:35.192 21:50:13 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:35.192 21:50:13 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:35.192 21:50:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:35.192 ************************************ 00:02:35.192 START TEST acl 00:02:35.192 ************************************ 00:02:35.192 21:50:13 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:35.192 * Looking for test storage... 
00:02:35.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:35.192 21:50:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:35.192 21:50:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:35.192 21:50:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:35.192 21:50:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:35.192 21:50:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:35.192 21:50:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:35.192 21:50:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:35.192 21:50:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:35.192 21:50:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:35.192 21:50:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:35.192 21:50:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:35.192 21:50:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:35.192 21:50:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:35.192 21:50:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:35.192 21:50:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:35.192 21:50:13 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:38.470 21:50:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:38.470 21:50:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:38.470 21:50:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.470 21:50:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:38.470 21:50:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.470 21:50:17 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:40.997 Hugepages 00:02:40.997 node hugesize free / total 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 00:02:40.997 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:40.997 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.256 21:50:20 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:41.256 21:50:20 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:41.256 21:50:20 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:41.256 21:50:20 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:41.256 21:50:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:41.256 ************************************ 00:02:41.256 START TEST denied 00:02:41.256 ************************************ 00:02:41.256 21:50:20 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:02:41.256 21:50:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:41.256 21:50:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:41.256 21:50:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:41.256 21:50:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.256 21:50:20 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:45.478 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:45.478 21:50:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:45.478 21:50:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:45.478 21:50:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:45.478 21:50:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:45.478 21:50:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:45.478 21:50:23 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:45.478 21:50:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:45.478 21:50:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:45.478 21:50:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.478 21:50:23 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.678 00:02:49.678 real 0m7.933s 00:02:49.678 user 0m2.423s 00:02:49.678 sys 0m4.804s 00:02:49.678 21:50:28 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:49.678 21:50:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:49.678 ************************************ 00:02:49.678 END TEST denied 00:02:49.678 ************************************ 00:02:49.678 21:50:28 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:49.678 21:50:28 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:49.678 21:50:28 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:49.678 21:50:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:49.678 ************************************ 00:02:49.678 START TEST allowed 00:02:49.678 ************************************ 00:02:49.678 21:50:28 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:02:49.678 21:50:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:49.678 21:50:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:49.678 21:50:28 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:49.678 21:50:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.678 21:50:28 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:53.877 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:53.877 21:50:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:53.877 21:50:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:53.877 21:50:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:53.877 21:50:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:53.877 21:50:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.079 00:02:58.079 real 0m8.208s 00:02:58.079 user 0m2.236s 00:02:58.079 sys 0m4.563s 00:02:58.079 21:50:36 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:58.079 21:50:36 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:58.079 ************************************ 00:02:58.079 END TEST allowed 00:02:58.079 ************************************ 00:02:58.079 00:02:58.079 real 0m23.291s 00:02:58.079 user 0m7.043s 00:02:58.079 sys 0m14.317s 00:02:58.079 21:50:36 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:58.079 21:50:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.079 ************************************ 00:02:58.079 END TEST acl 00:02:58.079 ************************************ 00:02:58.079 21:50:36 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:58.079 21:50:36 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:58.079 21:50:36 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:02:58.079 21:50:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:58.079 ************************************ 00:02:58.079 START TEST hugepages 00:02:58.079 ************************************ 00:02:58.079 21:50:36 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:58.079 * Looking for test storage... 00:02:58.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41656456 kB' 'MemAvailable: 45561260 kB' 'Buffers: 2704 kB' 'Cached: 10355660 kB' 'SwapCached: 0 kB' 'Active: 7206624 kB' 'Inactive: 3676148 kB' 'Active(anon): 6816760 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527868 kB' 'Mapped: 218668 kB' 'Shmem: 6292352 kB' 'KReclaimable: 486284 kB' 'Slab: 1116856 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 630572 kB' 'KernelStack: 22144 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 8241992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216388 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.079 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.080 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:58.081 21:50:36 
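The long runs of "continue" entries above are setup/common.sh's get_meminfo walking every field of the memory report until it reaches the one requested, here Hugepagesize, which returns 2048 and becomes default_hugepages; the same helper is reused later with other field names (AnonHugePages, HugePages_Surp, and so on). A minimal stand-alone sketch of that scan pattern, with illustrative names rather than the exact SPDK helper, assuming only the standard /proc and sysfs meminfo files:

#!/usr/bin/env bash
# Minimal re-creation of the scan pattern shown in the trace above.
# Names are illustrative; this is not the exact SPDK setup/common.sh code.
get_meminfo_sketch() {
    local get=$1 node=$2        # field name, optional NUMA node
    local var val _
    local mem_f=/proc/meminfo   # global view when no node is given
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Walk every "Field: value ..." line, skipping ("continue") until the match.
    # Per-node files prefix each line with "Node <n> ", so strip that first.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"             # e.g. 2048 for Hugepagesize
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

get_meminfo_sketch Hugepagesize   # typically prints 2048 on x86_64

With that value in hand the script records the default pool paths it will manipulate: /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages per size and /proc/sys/vm/nr_hugepages globally, as the trace shows.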
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:58.081 21:50:36 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:58.081 21:50:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:58.081 21:50:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:58.081 21:50:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:58.081 ************************************ 00:02:58.081 START TEST default_setup 00:02:58.081 ************************************ 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.081 21:50:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:01.379 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
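Before the device output continues below, the default_setup test above has already done its sizing: clear_hp zeroed every per-node pool, and get_test_nr_hugepages 2097152 0 asked for 2,097,152 kB (2 GiB) on node 0, which at the 2048 kB page size detected earlier works out to nr_hugepages=1024. A short sketch of that arithmetic and of the sysfs writes, using the standard kernel knobs the trace itself echoes into (the surrounding logic is illustrative, not the exact SPDK hugepages.sh code):

#!/usr/bin/env bash
# Sketch of the sizing and clearing steps traced above (illustrative only).
hugepage_kb=2048                               # from Hugepagesize above
requested_kb=2097152                           # 2 GiB requested by default_setup
nr_hugepages=$((requested_kb / hugepage_kb))   # 2097152 / 2048 = 1024 pages
echo "requesting $nr_hugepages pages of ${hugepage_kb} kB on node 0"

# clear_hp: zero every per-node, per-size pool before the test (root needed).
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 | sudo tee "$hp" > /dev/null
done

# Allocating only on node 0 is then the same kind of write:
echo "$nr_hugepages" | sudo tee \
    /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages > /dev/null

The scripts/setup.sh run whose output surrounds this point (ioatdma and NVMe functions moving to vfio-pci) is the step that reserves the requested pages and rebinds the test devices, after which verify_nr_hugepages re-reads the counters below.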
00:03:01.379 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:01.379 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:02.764 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43824852 kB' 'MemAvailable: 47729656 kB' 'Buffers: 2704 kB' 'Cached: 10355776 kB' 'SwapCached: 0 kB' 'Active: 7222544 kB' 'Inactive: 3676148 kB' 'Active(anon): 6832680 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543808 kB' 'Mapped: 219476 kB' 'Shmem: 6292468 kB' 'KReclaimable: 486284 kB' 'Slab: 1115700 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 629416 kB' 'KernelStack: 22192 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8257612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
216516 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 
21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.764 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.765 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43827116 kB' 'MemAvailable: 47731920 kB' 'Buffers: 2704 kB' 'Cached: 10355780 kB' 'SwapCached: 0 kB' 'Active: 7226916 kB' 'Inactive: 3676148 kB' 'Active(anon): 6837052 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548148 kB' 'Mapped: 219384 kB' 'Shmem: 6292472 kB' 'KReclaimable: 486284 kB' 'Slab: 1115772 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 629488 kB' 'KernelStack: 22352 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8261736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.766 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.767 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.768 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43825364 kB' 'MemAvailable: 47730168 kB' 'Buffers: 2704 kB' 'Cached: 10355796 kB' 'SwapCached: 0 kB' 'Active: 7221424 kB' 'Inactive: 3676148 kB' 'Active(anon): 6831560 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542484 kB' 'Mapped: 218616 kB' 'Shmem: 6292488 kB' 'KReclaimable: 486284 kB' 'Slab: 1115772 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 629488 kB' 'KernelStack: 22272 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8255484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216532 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.768 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.769 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:02.770 nr_hugepages=1024 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:02.770 resv_hugepages=0 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:02.770 surplus_hugepages=0 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:02.770 anon_hugepages=0 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43826188 kB' 'MemAvailable: 47730992 kB' 'Buffers: 2704 kB' 'Cached: 10355800 kB' 'SwapCached: 0 kB' 'Active: 7221344 kB' 'Inactive: 3676148 kB' 'Active(anon): 6831480 kB' 'Inactive(anon): 0 kB' 
'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542368 kB' 'Mapped: 218616 kB' 'Shmem: 6292492 kB' 'KReclaimable: 486284 kB' 'Slab: 1115772 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 629488 kB' 'KernelStack: 22256 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8255660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216548 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.770 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.771 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
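The trace above is the setup/common.sh get_meminfo helper doing a field-by-field scan of /proc/meminfo: every key that is not the one requested hits the continue branch, and the matching key's value is echoed back (HugePages_Surp and HugePages_Rsvd both read back 0, HugePages_Total reads back 1024, hence surp=0, resv=0, nr_hugepages=1024). Below is a minimal bash sketch of that lookup pattern; the name get_meminfo_sketch and the exact body are illustrative assumptions, not the verbatim helper being traced.

# Sketch of the meminfo lookup traced above (hypothetical name, not the
# verbatim setup/common.sh helper): read /proc/meminfo, or a per-node
# meminfo file when a node number is supplied, split each line with
# IFS=': ', skip every field that is not the requested key, print the value.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        # Per-node files prefix each entry with "Node <N> "; drop that prefix.
        [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # the long field-by-field scan seen above
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}
# Example (values from this run): get_meminfo_sketch HugePages_Total  -> 1024
#                                 get_meminfo_sketch HugePages_Rsvd   -> 0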
00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27299060 kB' 'MemUsed: 5293024 kB' 'SwapCached: 0 kB' 'Active: 1381344 kB' 'Inactive: 275688 kB' 'Active(anon): 1221492 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1496888 kB' 'Mapped: 93888 kB' 'AnonPages: 163436 kB' 'Shmem: 1061348 kB' 'KernelStack: 12456 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153060 kB' 'Slab: 427432 kB' 'SReclaimable: 153060 kB' 'SUnreclaim: 274372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.772 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:02.773 node0=1024 expecting 1024 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:02.773 00:03:02.773 real 0m5.029s 00:03:02.773 user 0m1.305s 00:03:02.773 sys 0m2.200s 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:02.773 21:50:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:02.773 ************************************ 00:03:02.773 END TEST default_setup 00:03:02.773 ************************************ 00:03:03.033 21:50:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:03.033 21:50:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:03.033 21:50:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:03.033 21:50:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:03.033 ************************************ 00:03:03.033 START TEST per_node_1G_alloc 00:03:03.033 ************************************ 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.033 21:50:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.566 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.566 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.567 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.906 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:05.906 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:05.906 21:50:44 
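The trace above closes out default_setup with node0=1024 pages as expected, then setup/hugepages.sh sizes the per_node_1G_alloc run: the 1048576 kB (1 GiB) request becomes nr_hugepages=512, which matches dividing the request by the 2048 kB Hugepagesize reported in the meminfo snapshots, and get_test_nr_hugepages_per_node assigns 512 pages to each of nodes 0 and 1 before setup.sh is re-run with NRHUGE=512 HUGENODE=0,1. A minimal standalone sketch of that sizing arithmetic follows; it is illustrative only, and the function and variable names are not the real SPDK helpers.

#!/usr/bin/env bash
# Illustrative sketch only -- not the actual setup/hugepages.sh helpers.
# Reproduces the sizing arithmetic visible in the trace: a 1048576 kB request
# with 2048 kB hugepages yields 512 pages, assigned to each requested node.

default_hugepages_kb=2048   # Hugepagesize reported in the meminfo snapshots

sketch_nr_hugepages_per_node() {
    local size_kb=$1; shift
    local nodes=("$@")                              # e.g. 0 1
    local nr=$(( size_kb / default_hugepages_kb ))  # 1048576 / 2048 = 512

    local node
    for node in "${nodes[@]}"; do
        echo "node${node}=${nr}"                    # each requested node gets the full count
    done
    echo "NRHUGE=${nr} HUGENODE=$(IFS=,; echo "${nodes[*]}")"
}

sketch_nr_hugepages_per_node 1048576 0 1
# -> node0=512, node1=512, NRHUGE=512 HUGENODE=0,1

With both nodes populated at 512 pages, the nr_hugepages=1024 seen at setup/hugepages.sh@147 above and the HugePages_Total: 1024 lines in the snapshots that follow are consistent with 2 x 512 pages.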
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43848748 kB' 'MemAvailable: 47753552 kB' 'Buffers: 2704 kB' 'Cached: 10355924 kB' 'SwapCached: 0 kB' 'Active: 7221164 kB' 'Inactive: 3676148 kB' 'Active(anon): 6831300 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542020 kB' 'Mapped: 217532 kB' 'Shmem: 6292616 kB' 'KReclaimable: 486284 kB' 'Slab: 1115228 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628944 kB' 'KernelStack: 22144 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8247488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216564 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.906 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
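The block of repeated IFS=': ' / read -r var val _ / continue lines above is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the requested key; for AnonHugePages it echoes 0, which verify_nr_hugepages stores as anon=0, and the same scan is about to run again for HugePages_Surp. A condensed sketch of that scan pattern is below; it is illustrative rather than the exact helper, which (as the [[ -e /sys/devices/system/node/node/meminfo ]] test above shows) also supports per-node meminfo files.

#!/usr/bin/env bash
# Illustrative sketch of the scan in the trace above: read /proc/meminfo line
# by line, skip every field that is not the requested one, and print the
# matching value. The real setup/common.sh helper additionally handles
# per-node files under /sys/devices/system/node/node<N>/meminfo.

get_meminfo_sketch() {
    local get=$1
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue     # the repeated "continue" lines above
        echo "${val:-0}"
        return 0
    done </proc/meminfo
    return 1
}

anon=$(get_meminfo_sketch AnonHugePages)     # 0 in the snapshots above
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in the snapshots above
echo "anon=${anon} surp=${surp}"

An equivalent single pass with awk would be: awk -v k='AnonHugePages:' '$1 == k {print $2}' /proc/meminfo; the shell loop here simply mirrors what the xtrace output shows.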
00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.907 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43847236 kB' 'MemAvailable: 47752040 kB' 'Buffers: 2704 kB' 'Cached: 10355928 kB' 'SwapCached: 0 kB' 'Active: 7220680 kB' 'Inactive: 3676148 kB' 'Active(anon): 6830816 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541496 kB' 'Mapped: 217500 kB' 'Shmem: 6292620 kB' 'KReclaimable: 486284 kB' 'Slab: 1115300 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 629016 kB' 'KernelStack: 22224 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8248832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216596 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.908 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.909 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43846048 kB' 'MemAvailable: 47750852 kB' 'Buffers: 2704 kB' 'Cached: 10355944 kB' 'SwapCached: 0 kB' 'Active: 7221416 kB' 'Inactive: 3676148 kB' 'Active(anon): 6831552 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542216 kB' 'Mapped: 217508 kB' 'Shmem: 6292636 kB' 'KReclaimable: 486284 kB' 'Slab: 1115300 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 629016 kB' 'KernelStack: 22304 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8248852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.910 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.911 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:05.912 nr_hugepages=1024 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:05.912 resv_hugepages=0 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:05.912 surplus_hugepages=0 00:03:05.912 21:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:05.912 anon_hugepages=0 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43848048 kB' 'MemAvailable: 47752852 kB' 'Buffers: 2704 kB' 'Cached: 10355968 kB' 'SwapCached: 0 kB' 'Active: 7221204 kB' 'Inactive: 3676148 kB' 'Active(anon): 6831340 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542008 kB' 'Mapped: 217500 kB' 'Shmem: 6292660 kB' 'KReclaimable: 486284 kB' 'Slab: 1115300 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 629016 kB' 'KernelStack: 22208 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8248876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.912 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.913 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.914 21:50:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28361880 kB' 'MemUsed: 4230204 kB' 'SwapCached: 0 kB' 'Active: 1378772 kB' 'Inactive: 275688 kB' 'Active(anon): 1218920 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1497000 kB' 'Mapped: 93036 kB' 'AnonPages: 160612 kB' 'Shmem: 1061460 kB' 'KernelStack: 12392 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153060 kB' 'Slab: 427404 kB' 'SReclaimable: 153060 kB' 'SUnreclaim: 274344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.914 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: the same compare/continue/read cycle repeats for every remaining node0 meminfo field from MemUsed through HugePages_Total, each one tested against HugePages_Surp and skipped] 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.915 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:05.916 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:05.916 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.916 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.916 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15489488 kB' 'MemUsed: 12213620 kB' 'SwapCached: 0 kB' 'Active: 5841820 kB' 'Inactive: 3400460 kB' 'Active(anon): 5611808 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3400460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8861696 kB' 'Mapped: 124464 kB' 'AnonPages: 380700 kB' 'Shmem: 5231224 kB' 'KernelStack: 9736 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 333224 kB' 'Slab: 687880 kB' 'SReclaimable: 333224 kB' 'SUnreclaim: 354656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:05.916 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.916 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.916 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.916 21:50:45 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:05.916 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: the same compare/continue/read cycle repeats for every node1 meminfo field from MemFree through Unaccepted, each one tested against HugePages_Surp and skipped]
00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:05.917 node0=512 expecting 512 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:05.917 node1=512 expecting 512 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:05.917 00:03:05.917 real 0m3.021s 00:03:05.917 user 0m1.031s 00:03:05.917 sys 0m1.965s 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:05.917 21:50:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:05.917 ************************************ 00:03:05.917 END TEST per_node_1G_alloc 00:03:05.917 ************************************ 00:03:05.917 21:50:45 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:05.917 21:50:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:05.917 21:50:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:05.917 21:50:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.177 ************************************ 00:03:06.177 
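The per_node_1G_alloc test that ends above derives its node0=512 / node1=512 expectations by looking single fields (HugePages_Surp, HugePages_Free, and so on) up in each NUMA node's meminfo. A minimal bash sketch of that lookup, modeled on the setup/common.sh get_meminfo calls visible in the trace (the body below is illustrative, not the SPDK script verbatim):

    #!/usr/bin/env bash
    # Minimal sketch of the per-node meminfo lookup the trace above performs.
    # Modeled on the setup/common.sh get_meminfo calls in this log; illustrative,
    # not the SPDK script verbatim.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local var val _
        while IFS=': ' read -r var val _; do
            # Skip each field until the requested key matches, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Example: surplus 2 MiB hugepages on NUMA node 1 (0 in the run above).
    get_meminfo HugePages_Surp 1

The repeated compare/continue/read entries in the trace are this loop unrolled once per meminfo field until the requested key matches and its value is echoed.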
START TEST even_2G_alloc 00:03:06.177 ************************************ 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:06.177 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:06.178 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:06.178 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:06.178 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:06.178 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:06.178 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:06.178 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:06.178 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:06.178 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.178 21:50:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.472 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 
00:03:09.472 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.472 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43838824 kB' 'MemAvailable: 47743628 kB' 'Buffers: 2704 kB' 'Cached: 10356088 kB' 'SwapCached: 0 kB' 'Active: 7221708 kB' 'Inactive: 3676148 kB' 'Active(anon): 6831844 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541812 kB' 'Mapped: 217640 kB' 'Shmem: 6292780 kB' 'KReclaimable: 486284 kB' 'Slab: 1114196 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 627912 kB' 
'KernelStack: 22224 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8246884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.472 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.472 
21:50:48 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: the same compare/continue/read cycle repeats for every /proc/meminfo field from Inactive through WritebackTmp, each one tested against AnonHugePages and skipped]
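The even_2G_alloc verification running at this point first confirms that transparent hugepages are not forced off (the [[ always [madvise] never != *[never]* ]] test earlier in the trace) and then records the current AnonHugePages figure from /proc/meminfo, which the read loop around here resolves to 0. A rough sketch of that step, reusing the get_meminfo helper sketched above and the standard sysfs THP switch (simplified relative to the full hugepages.sh bookkeeping):

    # Sketch of the anon-hugepage accounting step happening around this point in
    # the trace (simplified; not the hugepages.sh source verbatim).
    anon=0
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_state != *"[never]"* ]]; then
        # THP is not disabled, so record how much anonymous memory is already
        # backed by transparent hugepages (in kB); it is 0 in the run captured here.
        anon=$(get_meminfo AnonHugePages)
    fi
    echo "anon=${anon}"

The second meminfo dump and HugePages_Surp loop that follow apply the same get_meminfo pattern to the global hugepage counters (HugePages_Total and HugePages_Free of 1024 in that dump).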
00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.473 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.474 21:50:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.474 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43837908 kB' 'MemAvailable: 47742712 kB' 'Buffers: 2704 kB' 'Cached: 10356092 kB' 'SwapCached: 0 kB' 'Active: 7221740 kB' 'Inactive: 3676148 kB' 'Active(anon): 6831876 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542372 kB' 'Mapped: 217500 kB' 'Shmem: 6292784 kB' 'KReclaimable: 486284 kB' 'Slab: 1114160 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 627876 kB' 'KernelStack: 22240 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8246904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB'
...
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
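For readers following the trace: the scan above is setup/common.sh's get_meminfo reading /proc/meminfo (or a node's meminfo file), stripping any "Node <n>" prefix, and walking it key by key until it reaches the requested field (HugePages_Surp here, which comes back as 0). A rough, self-contained sketch of that kind of lookup is below; the helper name get_meminfo_value and its exact layout are illustrative assumptions, not the real SPDK helper.

#!/usr/bin/env bash
# Minimal sketch (assumed names, not SPDK's actual setup/common.sh):
# look up a single key in /proc/meminfo, or in a node's meminfo file,
# mirroring the per-key scan traced in the log above.
shopt -s extglob

get_meminfo_value() {
	local key=$1 node=${2:-}
	local mem_f=/proc/meminfo
	# Per-node variant, if a node number was given and the sysfs file exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	local line var val _
	while read -r line; do
		# Per-node meminfo files prefix every line with "Node <n> "; drop it.
		line=${line#Node +([0-9]) }
		IFS=': ' read -r var val _ <<<"$line"
		if [[ $var == "$key" ]]; then
			echo "${val:-0}"
			return 0
		fi
	done <"$mem_f"
	return 1
}

# Example usage, matching the values the test reads here:
get_meminfo_value HugePages_Surp      # e.g. 0
get_meminfo_value HugePages_Total     # e.g. 1024
get_meminfo_value HugePages_Surp 0    # same key, but from node 0's meminfo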
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.476 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43838160 kB' 'MemAvailable: 47742964 kB' 'Buffers: 2704 kB' 'Cached: 10356108 kB' 'SwapCached: 0 kB' 'Active: 7221384 kB' 'Inactive: 3676148 kB' 'Active(anon): 6831520 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542044 kB' 'Mapped: 217500 kB' 'Shmem: 6292800 kB' 'KReclaimable: 486284 kB' 'Slab: 1114160 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 627876 kB' 'KernelStack: 22240 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8246924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB'
...
00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:09.478 nr_hugepages=1024
00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:09.478 resv_hugepages=0
00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:09.478 surplus_hugepages=0
00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:09.478 anon_hugepages=0
00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
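At this point the test has established nr_hugepages=1024 with no reserved or surplus pages, and the trace that follows re-reads HugePages_Total and then checks each NUMA node's share (512 pages per node on this two-node box). As a rough stand-alone sketch of the arithmetic even_2G_alloc is verifying here, under stated assumptions (1024 requested 2048 kB pages, the standard per-node sysfs hugepage counters; helper and variable names are illustrative), something like the following would perform the same checks:

#!/usr/bin/env bash
# Sketch of the accounting this test verifies: the global 2 MB hugepage pool
# must add up, and with two NUMA nodes each node should hold an equal share
# (512 pages, i.e. 1 GB per node, 2 GB overall). Assumed values/paths noted.
shopt -s extglob nullglob

expected=1024    # nr_hugepages requested by the test: 1024 x 2048 kB = 2 GB

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
(( total == expected + surp + rsvd )) || { echo "pool accounting mismatch"; exit 1; }

nodes=(/sys/devices/system/node/node+([0-9]))
(( ${#nodes[@]} > 0 )) || { echo "no NUMA nodes found"; exit 1; }
per_node=$(( expected / ${#nodes[@]} ))
for n in "${nodes[@]}"; do
	# Per-node 2 MB hugepage count lives under the node's sysfs directory.
	got=$(<"$n/hugepages/hugepages-2048kB/nr_hugepages")
	(( got == per_node )) || { echo "uneven allocation on ${n##*/}"; exit 1; }
done
echo "OK: $per_node x 2048 kB hugepages on each of ${#nodes[@]} nodes"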
7221392 kB' 'Inactive: 3676148 kB' 'Active(anon): 6831528 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542052 kB' 'Mapped: 217500 kB' 'Shmem: 6292824 kB' 'KReclaimable: 486284 kB' 'Slab: 1114160 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 627876 kB' 'KernelStack: 22240 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8246948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.478 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.479 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.741 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.741 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.741 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.741 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.741 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.741 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
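Once HugePages_Total finally matches, the scan stops: get_meminfo echoes the value (1024 here) and returns 0, so the caller can capture it with command substitution and compare it against the expected total, exactly as setup/hugepages.sh@110 does with (( 1024 == nr_hugepages + surp + resv )). A caller-side sketch, assuming the meminfo_value helper from the previous sketch:

  # Sketch only, assuming the meminfo_value helper above; mirrors the
  # nr_hugepages + surplus + reserved check performed by the traced script.
  nr_hugepages=1024 surp=0 resv=0
  total=$(meminfo_value HugePages_Total)
  (( total == nr_hugepages + surp + resv )) && echo "hugepage total OK: $total"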
00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28359912 kB' 'MemUsed: 4232172 kB' 'SwapCached: 0 kB' 'Active: 1379716 kB' 'Inactive: 275688 kB' 'Active(anon): 1219864 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1497136 kB' 'Mapped: 93044 kB' 'AnonPages: 161524 kB' 'Shmem: 1061596 kB' 'KernelStack: 12536 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153060 kB' 'Slab: 426424 kB' 'SReclaimable: 153060 kB' 'SUnreclaim: 273364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.742 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.743 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15479956 kB' 'MemUsed: 12223152 kB' 'SwapCached: 0 kB' 'Active: 5841708 kB' 'Inactive: 3400460 kB' 'Active(anon): 5611696 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3400460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8861724 kB' 'Mapped: 124456 kB' 'AnonPages: 380524 kB' 'Shmem: 5231252 kB' 'KernelStack: 9704 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 333224 kB' 'Slab: 687736 kB' 'SReclaimable: 333224 kB' 'SUnreclaim: 354512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.744 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
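After the system-wide total checks out, verify_nr_hugepages repeats the same walk against each NUMA node's own counters in /sys/devices/system/node/nodeN/meminfo (whose lines carry a "Node N " prefix that the script strips with "${mem[@]#Node +([0-9]) }"), here confirming HugePages_Total: 512 and HugePages_Surp: 0 on node0 and node1 before printing the "nodeX=512 expecting 512" lines; the odd_alloc test that starts next reuses the same checks with 1025 pages split unevenly across the two nodes. A per-node sketch, again with assumed variable names and reusing the meminfo_value helper from above rather than the real SPDK code:

  # Sketch only (assumed loop variables), reusing meminfo_value from the
  # earlier sketch. Per-node counters live in /sys/devices/system/node/nodeN/meminfo,
  # where each line carries a "Node N " prefix that must be stripped first.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      pages=$(meminfo_value HugePages_Total <(sed 's/^Node [0-9]* *//' "$node_dir/meminfo"))
      echo "node${node}=${pages} hugepages"
  done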
00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:09.745 node0=512 expecting 512 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:09.745 node1=512 expecting 512 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:09.745 00:03:09.745 real 0m3.599s 00:03:09.745 user 0m1.358s 00:03:09.745 sys 0m2.307s 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:09.745 21:50:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:09.745 ************************************ 00:03:09.745 END TEST even_2G_alloc 00:03:09.745 ************************************ 00:03:09.745 21:50:48 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:09.745 21:50:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:09.745 21:50:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:09.745 21:50:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.745 ************************************ 00:03:09.745 START TEST odd_alloc 00:03:09.745 ************************************ 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:09.745 21:50:48 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.745 21:50:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.045 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:80:04.2 
(8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:13.045 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43847664 kB' 'MemAvailable: 47752468 kB' 'Buffers: 2704 kB' 'Cached: 10356252 kB' 'SwapCached: 0 kB' 'Active: 7223508 kB' 'Inactive: 3676148 kB' 'Active(anon): 6833644 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543708 kB' 'Mapped: 217620 kB' 'Shmem: 6292944 kB' 'KReclaimable: 486284 kB' 'Slab: 1114528 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628244 kB' 'KernelStack: 22160 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8247720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3102068 kB' 
'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.046 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43847868 kB' 'MemAvailable: 47752672 kB' 'Buffers: 2704 kB' 'Cached: 10356252 kB' 'SwapCached: 0 kB' 'Active: 7223448 kB' 'Inactive: 3676148 kB' 'Active(anon): 6833584 kB' 'Inactive(anon): 0 
kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543592 kB' 'Mapped: 217572 kB' 'Shmem: 6292944 kB' 'KReclaimable: 486284 kB' 'Slab: 1114488 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628204 kB' 'KernelStack: 22144 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8247740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216580 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 
21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.047 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 
21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.048 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43848372 kB' 'MemAvailable: 47753176 kB' 'Buffers: 2704 kB' 'Cached: 10356252 kB' 'SwapCached: 0 kB' 'Active: 7223488 kB' 'Inactive: 3676148 kB' 'Active(anon): 6833624 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543552 kB' 'Mapped: 217572 kB' 'Shmem: 6292944 kB' 'KReclaimable: 486284 kB' 'Slab: 1114488 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628204 kB' 'KernelStack: 22176 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8247760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216564 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.049 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 
21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.050 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:13.051 nr_hugepages=1025 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.051 resv_hugepages=0 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.051 surplus_hugepages=0 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.051 anon_hugepages=0 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.051 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43848480 kB' 'MemAvailable: 47753284 kB' 'Buffers: 2704 kB' 'Cached: 10356292 kB' 'SwapCached: 0 kB' 'Active: 7222732 kB' 'Inactive: 3676148 kB' 'Active(anon): 6832868 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543192 kB' 'Mapped: 217504 kB' 'Shmem: 6292984 kB' 'KReclaimable: 486284 kB' 'Slab: 1114476 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628192 kB' 'KernelStack: 22176 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8247780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216564 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.051 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- 
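[editor's note] The trace above is the get_meminfo helper scanning every meminfo field until it reaches HugePages_Total and echoes 1025. A minimal standalone sketch of that lookup pattern follows; the function name, extglob prefix-stripping and per-node file selection mirror what the trace shows, but this is an illustration written for this log, not the SPDK setup/common.sh itself.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern traced above: read a meminfo-style file,
# strip any "Node N " prefix, then walk the "Key: value" lines until the
# requested field is found and print its value.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _ line

    # Per-node lookups read the node's own meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix each line with "Node <id> "; drop it so the layout
    # matches /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every non-matching field
        echo "$val"
        return 0
    done
    return 1
}

# Usage mirroring the calls seen in the trace:
get_meminfo_sketch HugePages_Total
get_meminfo_sketch HugePages_Surp 0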
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.052 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28360132 kB' 'MemUsed: 4231952 kB' 'SwapCached: 0 kB' 'Active: 1382176 kB' 'Inactive: 275688 kB' 'Active(anon): 1222324 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1497260 kB' 'Mapped: 93056 kB' 'AnonPages: 163808 kB' 'Shmem: 1061720 kB' 'KernelStack: 12472 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153060 kB' 'Slab: 426816 kB' 'SReclaimable: 153060 kB' 'SUnreclaim: 273756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.053 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15489048 kB' 'MemUsed: 12214060 kB' 'SwapCached: 0 kB' 'Active: 5841200 kB' 'Inactive: 3400460 kB' 'Active(anon): 5611188 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3400460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8861756 kB' 'Mapped: 124448 kB' 'AnonPages: 380040 kB' 'Shmem: 5231284 kB' 'KernelStack: 9704 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 333224 kB' 'Slab: 687660 kB' 'SReclaimable: 333224 kB' 'SUnreclaim: 354436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.054 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
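[editor's note] At this point both per-node HugePages_Surp lookups have returned 0 and the test moves on to the comparison that produces the "node0=512 expecting 513" / "node1=513 expecting 512" lines below. A simplified illustration of that check follows (not SPDK's hugepages.sh itself); the concrete counts are taken from this log, and the key point is that the test compares the sets of per-node counts, since the kernel is free to place the odd page on either node.

#!/usr/bin/env bash
# Sketch of the odd_alloc verification: 1025 pages were requested across two
# NUMA nodes, and the test accepts any placement as long as the multiset of
# per-node counts is {512, 513}.
nodes_sys=([0]=512 [1]=513)    # pages the kernel actually placed per node
nodes_test=([0]=513 [1]=512)   # pages the test expected per node
sorted_s=() ; sorted_t=()

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    sorted_s[${nodes_sys[node]}]=1    # index by count, so node order drops out
    sorted_t[${nodes_test[node]}]=1
done

# Indexed-array keys expand in ascending order, so both sides read "512 513".
[[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]] && echo "odd_alloc split verified"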
# echo 'node0=512 expecting 513' 00:03:13.055 node0=512 expecting 513 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:13.055 node1=513 expecting 512 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:13.055 00:03:13.055 real 0m3.337s 00:03:13.055 user 0m1.196s 00:03:13.055 sys 0m2.176s 00:03:13.055 21:50:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:13.056 21:50:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:13.056 ************************************ 00:03:13.056 END TEST odd_alloc 00:03:13.056 ************************************ 00:03:13.056 21:50:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:13.056 21:50:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:13.056 21:50:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:13.056 21:50:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.056 ************************************ 00:03:13.056 START TEST custom_alloc 00:03:13.056 ************************************ 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.056 21:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.350 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.350 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- 
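[editor's note] The custom_alloc setup traced above ends with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and, on the next line, nr_hugepages=1536. A rough sketch of that arithmetic follows; it treats the requested sizes and the hugepage size in kB, which is what the numbers in this log imply (1048576 / 2048 = 512 and 2097152 / 2048 = 1024), and the variable names simply mirror the trace rather than reproduce the SPDK scripts.

#!/usr/bin/env bash
# Sketch: convert two size requests into per-node hugepage counts, then join
# them into the HUGENODE string and the total page count the verifier checks.
default_hugepages=2048                      # Hugepagesize in kB, per the log

pages_for() { echo $(( $1 / default_hugepages )); }

declare -a nodes_hp
nodes_hp[0]=$(pages_for 1048576)            # first request -> 512 pages on node 0
nodes_hp[1]=$(pages_for 2097152)            # second request -> 1024 pages on node 1

hugenode=() ; total=0
for node in "${!nodes_hp[@]}"; do
    hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( total += nodes_hp[node] ))
done

( IFS=, ; echo "HUGENODE='${hugenode[*]}' nr_hugepages=$total" )
# Prints: HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' nr_hugepages=1536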
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42827128 kB' 'MemAvailable: 46731932 kB' 'Buffers: 2704 kB' 'Cached: 10356416 kB' 'SwapCached: 0 kB' 'Active: 7223920 kB' 'Inactive: 3676148 kB' 'Active(anon): 6834056 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543900 kB' 'Mapped: 217596 kB' 'Shmem: 6293108 kB' 'KReclaimable: 486284 kB' 'Slab: 1114560 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628276 kB' 'KernelStack: 22160 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8248524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
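Note on the trace above: setup/common.sh's get_meminfo is walking every /proc/meminfo key until it reaches the requested field (here AnonHugePages), so the long runs of "continue" lines are just the non-matching keys being skipped. The snippet below is a minimal standalone sketch of that lookup pattern, assembled from the variable names visible in the trace (get_meminfo, mem_f, the node meminfo path); it is a simplified reconstruction, not the exact SPDK helper.

#!/usr/bin/env bash
# Sketch only: simplified form of the get_meminfo lookup traced above
# (setup/common.sh); error handling and the extglob-based prefix strip
# used by the real script are reduced to their essentials.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # The trace falls back to /proc/meminfo when no per-node file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node [0-9]* }              # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<< "$line" # e.g. var=HugePages_Total, val=1536
        [[ $var == "$get" ]] || continue       # the repeated "continue" lines in the log
        echo "$val"                            # kB for sizes, bare count for HugePages_*
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Total    # prints 1536 for the state captured in this log
get_meminfo AnonHugePages      # prints 0

With the meminfo dump shown in this run, the first call returns 1536 and the second 0, which is why the script sets anon=0 before moving on to HugePages_Surp.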
00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.350 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.350 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.350 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.350 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.351 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42827300 kB' 'MemAvailable: 46732104 kB' 'Buffers: 2704 kB' 'Cached: 10356420 kB' 'SwapCached: 0 kB' 'Active: 7223300 kB' 'Inactive: 3676148 kB' 'Active(anon): 6833436 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543748 kB' 'Mapped: 217516 kB' 'Shmem: 6293112 kB' 'KReclaimable: 486284 kB' 'Slab: 1114508 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628224 kB' 'KernelStack: 22160 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8248544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.352 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
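For orientation: by this point the log has built the custom allocation string 'nodes_hp[0]=512,nodes_hp[1]=1024' and is re-reading /proc/meminfo (HugePages_Surp here, HugePages_Rsvd next) to verify it. The sketch below reproduces that arithmetic under the two-node layout of this run; the variable names (nodes_hp, HUGENODE, nr_hugepages) mirror the traced script, but the final check is a simplified stand-in, not the verify_nr_hugepages source.

#!/usr/bin/env bash
# Sketch only: the nodes_hp / HUGENODE bookkeeping traced above, simplified.
nodes_hp=([0]=512 [1]=1024)    # pages requested per NUMA node in this run

HUGENODE=
nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=${HUGENODE:+,}"nodes_hp[$node]=${nodes_hp[node]}"
    (( nr_hugepages += nodes_hp[node] ))
done
echo "$HUGENODE"       # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$nr_hugepages"   # 1536

# Simplified verification against the kernel's view; the real script also
# checks AnonHugePages and the per-node split.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
(( total == nr_hugepages && surp == 0 && rsvd == 0 )) && echo "hugepages verified"

The numbers agree with the meminfo dumps in this log: HugePages_Total 1536, HugePages_Free 1536, HugePages_Rsvd 0, HugePages_Surp 0, i.e. 512 pages on node 0 plus 1024 on node 1.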
00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.353 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42827300 kB' 'MemAvailable: 46732104 kB' 'Buffers: 2704 kB' 'Cached: 10356436 kB' 'SwapCached: 0 kB' 'Active: 7223544 kB' 'Inactive: 3676148 kB' 'Active(anon): 6833680 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544024 kB' 'Mapped: 217516 kB' 'Shmem: 6293128 kB' 'KReclaimable: 486284 kB' 'Slab: 1114508 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628224 kB' 'KernelStack: 22192 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8248196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:16.354 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:16.354 .. 00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # (read -r / continue over each remaining /proc/meminfo key, Zswapped through HugePages_Free; none match HugePages_Rsvd)
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:16.356 nr_hugepages=1536
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:16.356 resv_hugepages=0
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:16.356 surplus_hugepages=0
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:16.356 anon_hugepages=0
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42827552 kB' 'MemAvailable: 46732356 kB' 'Buffers: 2704 kB' 'Cached: 10356456 kB' 'SwapCached: 0 kB' 'Active: 7223824 kB' 'Inactive: 3676148 kB' 'Active(anon): 6833960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544352 kB' 'Mapped: 217516 kB' 'Shmem: 6293148 kB' 'KReclaimable: 486284 kB' 'Slab: 1114508 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628224 kB' 'KernelStack: 22176 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8248584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB'
00:03:16.356 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.356 .. 00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # (read -r / continue over every /proc/meminfo key, MemTotal through Unaccepted; none match HugePages_Total)
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
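The HugePages_Total lookup above is the generic meminfo scan in setup/common.sh: pick /proc/meminfo (or a node's meminfo file when an index is passed), strip any "Node <N> " prefix, then read "Key: value" pairs with IFS=': ' until the requested key matches and echo its value. A minimal standalone sketch of that logic, assuming the behaviour shown in this trace (illustrative only, not the verbatim SPDK helper):

#!/usr/bin/env bash
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # With a node index, read that NUMA node's meminfo instead of the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value [kB]" pairs until the requested key is found.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total     # prints 1536 on this host, matching the trace above
get_meminfo HugePages_Surp 0    # node 0 surplus pages, 0 in this run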
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.358 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28377636 kB' 'MemUsed: 4214448 kB' 'SwapCached: 0 kB' 'Active: 1380380 kB' 'Inactive: 275688 kB' 'Active(anon): 1220528 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1497392 kB' 'Mapped: 93068 kB' 'AnonPages: 161832 kB' 'Shmem: 1061852 kB' 'KernelStack: 12440 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153060 kB' 'Slab: 426836 kB' 'SReclaimable: 153060 kB' 'SUnreclaim: 273776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:16.358 .. 00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # (read -r / continue over every node0 meminfo key, MemTotal through HugePages_Free; none match HugePages_Surp)
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
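At this point the global count (1536 pages, zero reserved and zero surplus) has checked out and the loop is collecting per-node figures: node 0 just reported HugePages_Surp 0, and node 1 is read next. A hedged sketch of the per-node accounting this loop is building toward, assuming the 512/1024 split requested above and the standard per-node meminfo files:

#!/usr/bin/env bash
# Expected split for this run: 512 pages on node 0, 1024 on node 1 (1536 total).
expected=(512 1024)

total=0
for node in 0 1; do
    meminfo=/sys/devices/system/node/node$node/meminfo
    pages=$(awk '/HugePages_Total/ {print $NF}' "$meminfo")
    surp=$(awk '/HugePages_Surp/ {print $NF}' "$meminfo")
    echo "node$node: HugePages_Total=$pages HugePages_Surp=$surp (expected ${expected[node]})"
    (( pages == expected[node] )) || exit 1   # each node holds its requested share
    (( surp == 0 )) || exit 1                 # no surplus pages were needed
    (( total += pages ))
done
(( total == 1536 )) && echo "per-node totals add up to nr_hugepages=1536"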
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.359 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.360 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14449984 kB' 'MemUsed: 13253124 kB' 'SwapCached: 0 kB' 'Active: 5842948 kB' 'Inactive: 3400460 kB' 'Active(anon): 5612936 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3400460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8861792 kB' 'Mapped: 124448 kB' 'AnonPages: 381868 kB' 'Shmem: 5231320 kB' 'KernelStack: 9720 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 333224 kB' 'Slab: 687672 kB' 'SReclaimable: 333224 kB' 'SUnreclaim: 354448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:16.360 .. 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # (read -r / continue over the node1 meminfo keys, MemTotal through ShmemHugePages so far; none match HugePages_Surp)
00:03:16.361 21:50:55
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:16.361 node0=512 expecting 512 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:16.361 node1=1024 expecting 1024 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:16.361 00:03:16.361 real 0m2.958s 00:03:16.361 user 0m0.960s 00:03:16.361 sys 0m1.964s 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:16.361 21:50:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:16.361 ************************************ 00:03:16.361 END TEST custom_alloc 00:03:16.361 ************************************ 00:03:16.361 21:50:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:16.361 21:50:55 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.361 21:50:55 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.361 21:50:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.361 ************************************ 00:03:16.361 START TEST no_shrink_alloc 00:03:16.361 ************************************ 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
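The trace above closes out the custom_alloc test (node0=512 and node1=1024, matching the expected "512,1024" split) and then starts no_shrink_alloc, which asks get_test_nr_hugepages for 2097152 kB pinned to node 0. What follows is a minimal bash sketch of the two pieces of logic being traced here, the meminfo field scan and the per-node expectation check; the helper names echo setup/common.sh and setup/hugepages.sh, but this is a simplified illustration under those assumptions, not the actual SPDK scripts.

#!/usr/bin/env bash
# Simplified sketch of the meminfo scan and per-node hugepage check that
# the xtrace walks through above/below. Not the real setup/common.sh or
# setup/hugepages.sh, just an illustration of the same idea.

# Return one field from /proc/meminfo (or a node's meminfo when a node id
# is given), the way get_meminfo is traced: every non-matching key hits
# "continue" until the requested key is found, then its value is echoed.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] && \
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0
}

# Per-node expectation check, matching the "node0=512 expecting 512" and
# "node1=1024 expecting 1024" lines: the test accumulates per-node counts
# into nodes_test[] and compares them against the expected split.
declare -A nodes_test=([0]=512 [1]=1024)
expected="512,1024"
got="${nodes_test[0]},${nodes_test[1]}"
[[ $got == "$expected" ]] && echo "hugepage split OK: $got"

# no_shrink_alloc requests 2097152 kB of hugepages on node 0; with the
# default 2048 kB hugepage size that works out to 1024 pages on that node.
size_kb=2097152 default_hugepages_kb=2048
echo "node0 gets $(( size_kb / default_hugepages_kb )) pages"

With that in mind, the verify_nr_hugepages pass that follows simply re-reads AnonHugePages and the HugePages_* counters through the same scan loop before and after the allocation, which is why the log repeats the field-by-field "continue" lines for each query.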
00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.361 21:50:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.659 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.659 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.659 21:50:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43870736 kB' 'MemAvailable: 47775540 kB' 'Buffers: 2704 kB' 'Cached: 10356584 kB' 'SwapCached: 0 kB' 'Active: 7226048 kB' 'Inactive: 3676148 kB' 'Active(anon): 6836184 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545692 kB' 'Mapped: 218660 kB' 'Shmem: 6293276 kB' 'KReclaimable: 486284 kB' 'Slab: 1114236 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 627952 kB' 'KernelStack: 22256 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8283608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.659 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.660 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43870976 kB' 'MemAvailable: 47775780 kB' 'Buffers: 2704 kB' 'Cached: 10356584 kB' 'SwapCached: 0 kB' 'Active: 7225252 kB' 'Inactive: 3676148 kB' 'Active(anon): 6835388 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545388 kB' 'Mapped: 218516 kB' 'Shmem: 6293276 kB' 'KReclaimable: 486284 kB' 'Slab: 1114220 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 627936 kB' 'KernelStack: 22240 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8283624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 
'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 
21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 
21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.661 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43870976 kB' 'MemAvailable: 47775780 kB' 'Buffers: 2704 kB' 'Cached: 10356584 kB' 'SwapCached: 0 kB' 'Active: 7225296 kB' 'Inactive: 3676148 kB' 'Active(anon): 6835432 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545428 kB' 'Mapped: 218516 kB' 'Shmem: 6293276 kB' 'KReclaimable: 486284 kB' 'Slab: 1114220 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 627936 kB' 'KernelStack: 22256 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8283648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.662 21:50:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.663 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:03:19.664 nr_hugepages=1024 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:19.664 resv_hugepages=0 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:19.664 surplus_hugepages=0 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:19.664 anon_hugepages=0 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.664 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43870752 kB' 'MemAvailable: 47775556 kB' 'Buffers: 2704 kB' 'Cached: 10356624 kB' 'SwapCached: 0 kB' 'Active: 7225312 kB' 'Inactive: 3676148 kB' 'Active(anon): 6835448 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545396 kB' 'Mapped: 218516 kB' 'Shmem: 6293316 kB' 'KReclaimable: 486284 kB' 'Slab: 1114220 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 627936 kB' 'KernelStack: 22240 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8283668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.665 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:19.666 
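get_nodes above fills the nodes_sys map with one entry per NUMA node (1024 pages on node0, 0 on node1 in this run). A standalone sketch that reads the same per-node counts; the sysfs hugepage path is a standard kernel interface, while the exact source hugepages.sh uses may differ, so treat the loop body as illustrative:

    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        # ${node##*node} leaves just the node number, as in the trace above.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "nodes seen: ${!nodes_sys[*]}"       # 0 1 on this machine
    echo "node0 pages: ${nodes_sys[0]}"       # 1024 in the run above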
21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.666 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27321952 kB' 'MemUsed: 5270132 kB' 'SwapCached: 0 kB' 'Active: 1381320 kB' 'Inactive: 275688 kB' 'Active(anon): 1221468 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1497524 kB' 'Mapped: 93188 kB' 'AnonPages: 162648 kB' 'Shmem: 1061984 kB' 'KernelStack: 12488 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153060 kB' 'Slab: 426668 kB' 'SReclaimable: 153060 kB' 'SUnreclaim: 273608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.667 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.667 21:50:58 
[... setup/common.sh@31-32 loop: remaining /proc/meminfo keys (Mlocked ... HugePages_Free) each skipped with 'continue' until HugePages_Surp is reached ...]
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:19.668 node0=1024 expecting 1024
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.668 21:50:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:22.966 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:22.966 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:22.966 INFO: Requested 512 hugepages but 1024 already allocated on node0
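The scan traced above is setup/common.sh's get_meminfo helper: it slurps the meminfo snapshot with mapfile, strips any leading "Node N " prefix, then walks the snapshot with IFS=': ' until the requested key (HugePages_Surp here) matches and echoes its value. A minimal stand-alone sketch of the same lookup, reading only the system-wide /proc/meminfo (the helper name get_meminfo_value and the omission of per-node handling are simplifications for illustration, not the SPDK implementation):

# Sketch: echo the value of one /proc/meminfo field, e.g. HugePages_Surp.
get_meminfo_value() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		# Each meminfo line looks like "HugePages_Surp:    0" or "MemFree: 43813896 kB".
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < /proc/meminfo
	return 1
}

get_meminfo_value HugePages_Surp   # prints 0 on the node traced above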
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.966 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43813896 kB' 'MemAvailable: 47718700 kB' 'Buffers: 2704 kB' 'Cached: 10356708 kB' 'SwapCached: 0 kB' 'Active: 7232292 kB' 'Inactive: 3676148 kB' 'Active(anon): 6842428 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552340 kB' 'Mapped: 219452 kB' 'Shmem: 6293400 kB' 'KReclaimable: 486284 kB' 'Slab: 1114604 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628320 kB' 'KernelStack: 22320 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8293172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216776 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@31-32 loop: snapshot keys scanned one by one, each non-matching key skipped with 'continue', until AnonHugePages is reached ...]
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
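With anon read back as 0, the helper is re-run for HugePages_Surp and then HugePages_Rsvd before the per-node totals are compared against what the test expects (the "node0=1024 expecting 1024" line earlier). A hedged sketch of that per-node comparison using the kernel's standard sysfs hugepage counters (illustrative only, not the hugepages.sh code; the node number and expected count are taken from the log above):

# Sketch: check that NUMA node 0 still holds the expected number of 2048 kB hugepages.
node=0
expected=1024
nr=$(cat /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages)
if (( nr == expected )); then
	echo "node${node}=${nr} expecting ${expected}"
else
	echo "node${node} has ${nr} hugepages, expected ${expected}" >&2
fi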
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.968 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43816312 kB' 'MemAvailable: 47721116 kB' 'Buffers: 2704 kB' 'Cached: 10356712 kB' 'SwapCached: 0 kB' 'Active: 7227824 kB' 'Inactive: 3676148 kB' 'Active(anon): 6837960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547820 kB' 'Mapped: 219020 kB' 'Shmem: 6293404 kB' 'KReclaimable: 486284 kB' 'Slab: 1114568 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628284 kB' 'KernelStack: 22320 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8289352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@31-32 loop: snapshot keys scanned one by one, each non-matching key skipped with 'continue', until HugePages_Surp is reached ...]
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.970 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43810572 kB' 'MemAvailable: 47715376 kB' 'Buffers: 2704 kB' 'Cached: 10356732 kB' 'SwapCached: 0 kB' 'Active: 7231788 kB' 'Inactive: 3676148 kB' 'Active(anon): 6841924 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551764 kB' 'Mapped: 219360 kB' 'Shmem: 6293424 kB' 'KReclaimable: 486284 kB' 'Slab: 1114560 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628276 kB' 'KernelStack: 22304 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8293348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216696 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@31-32 loop: snapshot keys scanned one by one, each non-matching key skipped with 'continue', looking for HugePages_Rsvd ...]
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
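The xtrace entries on either side of this point are all iterations of one scan in setup/common.sh: each /proc/meminfo line is split on IFS=': ', the key is compared against the one requested (HugePages_Rsvd here), non-matching keys are skipped with continue, and the value is echoed once the key matches. A minimal sketch of that pattern, assuming a hypothetical helper name meminfo_value (the traced script's own helper is get_meminfo and works from a mapfile'd copy of the file):

    meminfo_value() {
        # Usage: meminfo_value KEY [FILE]; FILE defaults to /proc/meminfo.
        local get=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            # Skip every key that is not the one we were asked for.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$file"
        return 1
    }

    # e.g. meminfo_value HugePages_Rsvd   -> prints 0 on the host traced here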
00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.971 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.972 nr_hugepages=1024 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.972 resv_hugepages=0 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.972 surplus_hugepages=0 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.972 anon_hugepages=0 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43830164 kB' 'MemAvailable: 47734968 kB' 'Buffers: 2704 kB' 
'Cached: 10356744 kB' 'SwapCached: 0 kB' 'Active: 7226860 kB' 'Inactive: 3676148 kB' 'Active(anon): 6836996 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546904 kB' 'Mapped: 218516 kB' 'Shmem: 6293436 kB' 'KReclaimable: 486284 kB' 'Slab: 1114520 kB' 'SReclaimable: 486284 kB' 'SUnreclaim: 628236 kB' 'KernelStack: 22352 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8288060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3102068 kB' 'DirectMap2M: 14409728 kB' 'DirectMap1G: 51380224 kB' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.972 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
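Once a scan like the one in progress here returns, the test closes the loop with plain shell arithmetic: the pool size read back from /proc/meminfo must equal the requested pages plus any surplus and reserved pages, which is the (( 1024 == nr_hugepages + surp + resv )) check traced shortly before and again shortly after this scan. A compact restatement of that bookkeeping, with the variable values taken from the log above:

    nr_hugepages=1024 resv=0 surp=0   # values echoed by the test above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool accounted for: $total pages"
    fi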
00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.973 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27307720 kB' 'MemUsed: 5284364 kB' 'SwapCached: 0 kB' 'Active: 1386732 kB' 'Inactive: 275688 kB' 'Active(anon): 1226880 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1497560 kB' 'Mapped: 93196 kB' 'AnonPages: 168152 kB' 'Shmem: 1062020 kB' 'KernelStack: 12552 kB' 'PageTables: 
4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153060 kB' 'Slab: 426804 kB' 'SReclaimable: 153060 kB' 'SUnreclaim: 273744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.974 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
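This scan is the per-node pass of the same routine: with node=0 set, the figures now come from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, and the result feeds the "node0=1024 expecting 1024" verdict printed further down. A sketch of that per-node comparison, with the expected counts (1024 on node0, 0 on node1) taken from the nodes_sys assignments in the log; the array names here are illustrative, not the script's own:

    expected=(1024 0)   # per-node expectations, as set up in the trace above
    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}
        # Node meminfo lines look like "Node 0 HugePages_Total: 1024" (no kB suffix).
        actual[id]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    done
    for id in "${!expected[@]}"; do
        echo "node$id=${actual[id]:-0} expecting ${expected[id]}"
    done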
00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.975 node0=1024 expecting 1024 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.975 00:03:22.975 real 0m6.642s 00:03:22.975 user 0m2.403s 00:03:22.975 sys 0m4.330s 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.975 21:51:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:22.975 ************************************ 00:03:22.975 END TEST no_shrink_alloc 00:03:22.975 ************************************ 00:03:22.975 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:22.975 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:22.975 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.975 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:22.976 21:51:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:22.976 00:03:22.976 real 0m25.207s 00:03:22.976 user 0m8.506s 00:03:22.976 sys 0m15.354s 00:03:22.976 21:51:01 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.976 21:51:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.976 ************************************ 00:03:22.976 END TEST hugepages 00:03:22.976 ************************************ 00:03:22.976 21:51:01 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:22.976 21:51:01 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:22.976 21:51:01 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:22.976 21:51:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:22.976 ************************************ 00:03:22.976 START TEST driver 00:03:22.976 ************************************ 00:03:22.976 21:51:02 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:22.976 * Looking for test storage... 
00:03:22.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:22.976 21:51:02 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:22.976 21:51:02 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.976 21:51:02 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.262 21:51:06 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:28.262 21:51:06 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.262 21:51:06 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.262 21:51:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:28.262 ************************************ 00:03:28.262 START TEST guess_driver 00:03:28.262 ************************************ 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:28.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:28.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:28.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:28.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:28.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:28.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:28.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:28.262 21:51:06 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:28.262 Looking for driver=vfio-pci 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.262 21:51:06 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 
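guess_driver, traced above, settles on vfio-pci by looking at the vfio unsafe-noiommu parameter, counting /sys/kernel/iommu_groups entries, and asking modprobe whether vfio_pci resolves to real .ko modules. A rough sketch of that decision using only the probes visible in the trace; the uio_pci_generic fallback at the end is an assumption, since this run never needs it:

#!/usr/bin/env bash
# Sketch: pick a userspace PCI driver the way guess_driver does above.

pick_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    # The traced run found 176 entries under /sys/kernel/iommu_groups.
    local iommu_groups=(/sys/kernel/iommu_groups/*)

    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        # vfio_pci must resolve to actual kernel modules to be usable.
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo uio_pci_generic   # assumed fallback, not exercised in this log
}

echo "Looking for driver=$(pick_driver)"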
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.555 21:51:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.935 21:51:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.935 21:51:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.935 21:51:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.935 21:51:12 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:32.935 21:51:12 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:32.935 21:51:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.935 21:51:12 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.128 00:03:37.128 real 0m9.274s 00:03:37.128 user 0m2.319s 00:03:37.128 sys 0m4.580s 00:03:37.128 21:51:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.128 21:51:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.128 ************************************ 00:03:37.128 END TEST guess_driver 00:03:37.128 ************************************ 00:03:37.128 00:03:37.128 real 0m14.176s 00:03:37.128 user 0m3.683s 00:03:37.128 sys 0m7.367s 00:03:37.128 21:51:16 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.128 
21:51:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.128 ************************************ 00:03:37.128 END TEST driver 00:03:37.129 ************************************ 00:03:37.129 21:51:16 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:37.129 21:51:16 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.129 21:51:16 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.129 21:51:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.129 ************************************ 00:03:37.129 START TEST devices 00:03:37.129 ************************************ 00:03:37.129 21:51:16 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:37.387 * Looking for test storage... 00:03:37.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:37.387 21:51:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:37.387 21:51:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:37.387 21:51:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.387 21:51:16 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:41.581 21:51:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:41.581 21:51:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:41.581 21:51:19 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:41.581 21:51:19 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.581 21:51:19 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:41.581 21:51:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:41.581 21:51:19 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.581 21:51:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:41.581 21:51:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:41.581 21:51:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:41.581 21:51:19 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:41.581 No valid GPT data, 
bailing 00:03:41.581 21:51:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.581 21:51:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:41.581 21:51:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:41.581 21:51:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:41.581 21:51:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:41.581 21:51:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:41.581 21:51:20 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:41.581 21:51:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:41.581 21:51:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:41.581 21:51:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:41.581 21:51:20 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:41.581 21:51:20 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:41.581 21:51:20 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:41.581 21:51:20 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.581 21:51:20 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.581 21:51:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:41.581 ************************************ 00:03:41.581 START TEST nvme_mount 00:03:41.581 ************************************ 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:41.581 21:51:20 
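Before any mount test runs, the devices suite above screens each nvme namespace: zoned devices are skipped, a namespace with an existing partition table counts as in use (spdk-gpt.py reported no valid GPT and blkid found no PTTYPE here), and only namespaces of at least min_disk_size (3221225472 bytes, i.e. 3 GiB; this namespace reported 1600321314816) are kept. A small sketch of those checks, assuming /sys/block/<dev>/size is a 512-byte sector count (it is on Linux) and leaving out the SPDK-specific spdk-gpt.py step:

#!/usr/bin/env bash
# Sketch: filter nvme namespaces the way devices.sh does in the log above.
shopt -s extglob

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

is_zoned() {
    [[ -e /sys/block/$1/queue/zoned && $(< "/sys/block/$1/queue/zoned") != none ]]
}

has_partition_table() {
    # blkid prints a PTTYPE value (gpt, dos, ...) only when a table exists.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null) ]]
}

size_in_bytes() {
    echo $(( $(< "/sys/block/$1/size") * 512 ))
}

for path in /sys/block/nvme!(*c*); do
    dev=${path##*/}
    [[ -b /dev/$dev ]] || continue
    is_zoned "$dev" && continue
    has_partition_table "$dev" && continue
    (( $(size_in_bytes "$dev") >= min_disk_size )) && echo "candidate: $dev"
done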
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:41.581 21:51:20 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:42.149 Creating new GPT entries in memory. 00:03:42.149 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:42.149 other utilities. 00:03:42.149 21:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:42.149 21:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.149 21:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:42.149 21:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:42.149 21:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:43.089 Creating new GPT entries in memory. 00:03:43.089 The operation has completed successfully. 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2481929 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
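nvme_mount, running above, zaps the GPT, carves a single partition over sectors 2048-2099199 (about 1 GiB), waits for the partition uevent, formats it ext4 and mounts it at the test directory, where a test_nvme file is dropped. A condensed and destructive sketch of that sequence; the device and mount point are placeholders, and udevadm settle stands in for the sync_dev_uevents.sh helper used in the trace:

#!/usr/bin/env bash
# Sketch only: this wipes $disk. Placeholders: disk, mnt.
set -euo pipefail

disk=/dev/nvme0n1          # placeholder target namespace
mnt=/tmp/nvme_mount        # placeholder mount point

sgdisk "$disk" --zap-all                 # destroy existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199      # one ~1 GiB partition, as in the trace
udevadm settle                           # wait for the partition node to show up

mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"                     # dummy file, like test_nvme above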
00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.089 21:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.627 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.887 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:45.887 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:45.887 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:45.887 21:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:45.887 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:45.887 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.146 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:46.146 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:46.146 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:46.146 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:46.147 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:46.147 21:51:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:46.147 
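Every verify pass in this suite pins setup.sh to one controller with PCI_ALLOWED=0000:d8:00.0, replays "setup output config", and reads each status line as "read -r pci _ _ status", accepting the pass only when the line for that controller carries an "Active devices:" message naming the expected mount (found=1 above). A stripped-down sketch of that parsing loop; the sample line piped in at the end is hypothetical and only shaped like the entries in this log:

#!/usr/bin/env bash
# Sketch: confirm that the expected mount is listed as an active device for one PCI dev.

verify_active() {
    local want_pci=$1 want_mount=$2
    local pci _ status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$want_pci" ]] || continue
        # Accept only if the status column lists the mount we created.
        [[ $status == *"Active devices: "*"$want_mount"* ]] && found=1
    done
    (( found == 1 ))
}

# Hypothetical config line, shaped like the ones traced above:
printf '0000:d8:00.0 (vendor device): Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev\n' |
    verify_active 0000:d8:00.0 nvme0n1:nvme0n1p1 && echo 'mount verified'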
21:51:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.147 21:51:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:46.147 21:51:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:46.147 21:51:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.406 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.407 21:51:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.979 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.980 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:49.239 21:51:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.239 21:51:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.535 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:52.536 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.536 00:03:52.536 real 0m11.294s 00:03:52.536 user 0m2.969s 00:03:52.536 sys 0m6.074s 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.536 21:51:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:52.536 ************************************ 00:03:52.536 END TEST nvme_mount 00:03:52.536 
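Teardown for nvme_mount follows a fixed order in the trace above: if the test directory is still a mountpoint it is unmounted, then wipefs clears the filesystem signature on the partition and finally the GPT/MBR signatures on the whole namespace, leaving it blank for the next test. A small sketch of that idiom with placeholder paths:

#!/usr/bin/env bash
# Sketch: nvme_mount-style cleanup. Placeholders: mnt, disk.

mnt=/tmp/nvme_mount
disk=/dev/nvme0n1

cleanup_nvme() {
    # Unmount only if something is actually mounted there.
    mountpoint -q "$mnt" && umount "$mnt"

    # Partition signature first, then the disk's own signatures,
    # mirroring the wipefs calls logged above.
    [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
    [[ -b $disk ]] && wipefs --all "$disk"
}

cleanup_nvme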
************************************ 00:03:52.536 21:51:31 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:52.536 21:51:31 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.536 21:51:31 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.536 21:51:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:52.536 ************************************ 00:03:52.536 START TEST dm_mount 00:03:52.536 ************************************ 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:52.536 21:51:31 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:53.473 Creating new GPT entries in memory. 00:03:53.473 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:53.473 other utilities. 00:03:53.473 21:51:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:53.473 21:51:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.473 21:51:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:53.473 21:51:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:53.473 21:51:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:54.412 Creating new GPT entries in memory. 00:03:54.412 The operation has completed successfully. 
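The dm_mount run now underway splits the namespace into two 1 GiB partitions and then builds a device-mapper node on top of them (dmsetup create nvme_dm_test), which is formatted, mounted and later torn down with dmsetup remove --force. A sketch of that flow under the assumption of a linear table concatenating the two partitions; the actual table devices.sh feeds to dmsetup is not visible in this log, and all paths are placeholders:

#!/usr/bin/env bash
# Sketch only: builds a dm device over two existing partitions, then tears it down.
set -euo pipefail

p1=/dev/nvme0n1p1   # placeholder: first 1 GiB partition
p2=/dev/nvme0n1p2   # placeholder: second 1 GiB partition
name=nvme_dm_test
mnt=/tmp/dm_mount

s1=$(blockdev --getsz "$p1")   # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

# Assumed linear table: p1 provides sectors 0..s1-1, p2 the next s2 sectors.
dmsetup create "$name" <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

dm=$(readlink -f "/dev/mapper/$name")   # resolves to /dev/dm-0 in the trace
echo "created $dm, holder of ${p1##*/}: $(ls "/sys/class/block/${p1##*/}/holders")"

mkdir -p "$mnt"
mkfs.ext4 -qF "/dev/mapper/$name"
mount "/dev/mapper/$name" "$mnt"

# Teardown, as logged at the end of dm_mount:
umount "$mnt"
dmsetup remove --force "$name"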
00:03:54.412 21:51:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:54.412 21:51:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.412 21:51:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:54.412 21:51:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:54.412 21:51:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:55.349 The operation has completed successfully. 00:03:55.349 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:55.349 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.349 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2486150 00:03:55.349 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:55.349 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.349 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:55.349 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.609 21:51:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.903 21:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.903 
21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.903 21:51:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:02.193 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.193 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.193 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.193 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.193 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.193 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.193 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:02.194 21:51:40 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:02.194 21:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.194 21:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:02.194 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:02.194 21:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:02.194 21:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:02.194 00:04:02.194 real 0m9.609s 00:04:02.194 user 0m2.182s 00:04:02.194 sys 0m4.474s 00:04:02.194 21:51:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.194 21:51:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:02.194 ************************************ 00:04:02.194 END TEST dm_mount 00:04:02.194 ************************************ 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.194 
21:51:41 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:02.194 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:02.194 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:02.194 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:02.194 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:02.194 21:51:41 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:02.194 00:04:02.194 real 0m25.099s 00:04:02.194 user 0m6.461s 00:04:02.194 sys 0m13.326s 00:04:02.194 21:51:41 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.194 21:51:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:02.194 ************************************ 00:04:02.194 END TEST devices 00:04:02.194 ************************************ 00:04:02.452 00:04:02.452 real 1m28.220s 00:04:02.452 user 0m25.847s 00:04:02.452 sys 0m50.694s 00:04:02.452 21:51:41 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.452 21:51:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.452 ************************************ 00:04:02.452 END TEST setup.sh 00:04:02.452 ************************************ 00:04:02.452 21:51:41 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:05.748 Hugepages 00:04:05.748 node hugesize free / total 00:04:05.748 node0 1048576kB 0 / 0 00:04:05.748 node0 2048kB 2048 / 2048 00:04:05.748 node1 1048576kB 0 / 0 00:04:05.748 node1 2048kB 0 / 0 00:04:05.748 00:04:05.748 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:05.748 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:05.748 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:05.748 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:05.748 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:05.748 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:05.748 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:05.748 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:05.748 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:05.748 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:05.748 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:05.748 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:05.748 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:05.748 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:05.748 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:05.748 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:05.748 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:05.748 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:05.748 21:51:44 -- spdk/autotest.sh@130 -- # uname -s 00:04:05.748 
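The device cleanup traced above follows a fixed order: unmount the dm_mount test directory, tear down the nvme_dm_test device-mapper target, wipe the partition signatures, and finally wipe /dev/nvme0n1 itself so the next test group starts from a blank disk. A condensed sketch of that sequence, using only the commands visible in the trace (an illustration of the flow, not the exact cleanup_dm/cleanup_nvme helpers from setup/devices.sh):

  mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
  mountpoint -q "$mnt" && umount "$mnt"                       # devices.sh@33 / @182
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
      [[ -b $part ]] && wipefs --all "$part"                   # clears the ext4 / GPT signatures
  done
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1           # final wipe; the kernel re-reads the partition table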
21:51:44 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:05.748 21:51:44 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:05.748 21:51:44 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.036 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.036 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.036 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.036 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.036 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.036 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.326 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.232 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.232 21:51:50 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:12.169 21:51:51 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:12.169 21:51:51 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:12.169 21:51:51 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:12.169 21:51:51 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:12.169 21:51:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:12.169 21:51:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:12.169 21:51:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.169 21:51:51 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.169 21:51:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:12.169 21:51:51 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:12.169 21:51:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:12.169 21:51:51 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.459 Waiting for block devices as requested 00:04:15.459 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:15.459 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:15.459 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:15.459 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:15.459 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:15.459 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:15.459 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:15.459 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:15.718 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:15.718 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:15.718 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:15.978 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:15.978 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:15.978 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:16.237 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:16.237 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:16.237 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:16.497 21:51:55 -- common/autotest_common.sh@1538 -- # for bdf in 
"${bdfs[@]}" 00:04:16.497 21:51:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:16.497 21:51:55 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:04:16.497 21:51:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:16.497 21:51:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:16.497 21:51:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:16.497 21:51:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:16.497 21:51:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:16.497 21:51:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:16.497 21:51:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:16.497 21:51:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:16.497 21:51:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:16.497 21:51:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:16.497 21:51:55 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:16.497 21:51:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:16.497 21:51:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:16.497 21:51:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:16.497 21:51:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:16.497 21:51:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:16.497 21:51:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:16.497 21:51:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:16.497 21:51:55 -- common/autotest_common.sh@1557 -- # continue 00:04:16.497 21:51:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:16.497 21:51:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:16.497 21:51:55 -- common/autotest_common.sh@10 -- # set +x 00:04:16.497 21:51:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:16.497 21:51:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.497 21:51:55 -- common/autotest_common.sh@10 -- # set +x 00:04:16.497 21:51:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.034 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.034 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.034 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.034 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.034 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.034 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.294 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.203 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:21.203 21:52:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:21.203 21:52:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.203 
21:52:00 -- common/autotest_common.sh@10 -- # set +x 00:04:21.203 21:52:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:21.203 21:52:00 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:21.203 21:52:00 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:21.203 21:52:00 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:21.203 21:52:00 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:21.203 21:52:00 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:21.203 21:52:00 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:21.203 21:52:00 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:21.203 21:52:00 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.203 21:52:00 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:21.203 21:52:00 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:21.203 21:52:00 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:21.203 21:52:00 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:21.203 21:52:00 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:21.203 21:52:00 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:21.203 21:52:00 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:21.203 21:52:00 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:21.203 21:52:00 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:21.203 21:52:00 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:04:21.203 21:52:00 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:04:21.203 21:52:00 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2495618 00:04:21.203 21:52:00 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.203 21:52:00 -- common/autotest_common.sh@1598 -- # waitforlisten 2495618 00:04:21.203 21:52:00 -- common/autotest_common.sh@831 -- # '[' -z 2495618 ']' 00:04:21.203 21:52:00 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.203 21:52:00 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.203 21:52:00 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.203 21:52:00 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.203 21:52:00 -- common/autotest_common.sh@10 -- # set +x 00:04:21.203 [2024-07-24 21:52:00.265045] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
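The BDF discovery traced just above (autotest_common.sh lines 1513-1519 and 1577-1586) reduces to two steps: ask scripts/gen_nvme.sh for a JSON bdev config and pull each controller's traddr out with jq, then keep only the controllers whose PCI device ID in sysfs is 0x0a54 for the OPAL revert pass. A minimal sketch built from the same commands (the full workspace path is shortened to scripts/ for readability):

  bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))   # -> 0000:d8:00.0 on this node
  (( ${#bdfs[@]} == 0 )) && echo 'no NVMe controllers found' >&2
  opal_bdfs=()
  for bdf in "${bdfs[@]}"; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")              # PCI device ID from sysfs
      [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")                # keep only matching controllers
  done
  printf '%s\n' "${opal_bdfs[@]}"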
00:04:21.203 [2024-07-24 21:52:00.265100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495618 ] 00:04:21.203 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.203 [2024-07-24 21:52:00.334745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.203 [2024-07-24 21:52:00.409066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.141 21:52:01 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:22.141 21:52:01 -- common/autotest_common.sh@864 -- # return 0 00:04:22.141 21:52:01 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:22.141 21:52:01 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:22.141 21:52:01 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:25.433 nvme0n1 00:04:25.433 21:52:04 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:25.433 [2024-07-24 21:52:04.231444] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:25.433 request: 00:04:25.433 { 00:04:25.433 "nvme_ctrlr_name": "nvme0", 00:04:25.433 "password": "test", 00:04:25.433 "method": "bdev_nvme_opal_revert", 00:04:25.433 "req_id": 1 00:04:25.433 } 00:04:25.433 Got JSON-RPC error response 00:04:25.433 response: 00:04:25.433 { 00:04:25.433 "code": -32602, 00:04:25.433 "message": "Invalid parameters" 00:04:25.433 } 00:04:25.433 21:52:04 -- common/autotest_common.sh@1604 -- # true 00:04:25.433 21:52:04 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:25.433 21:52:04 -- common/autotest_common.sh@1608 -- # killprocess 2495618 00:04:25.433 21:52:04 -- common/autotest_common.sh@950 -- # '[' -z 2495618 ']' 00:04:25.433 21:52:04 -- common/autotest_common.sh@954 -- # kill -0 2495618 00:04:25.433 21:52:04 -- common/autotest_common.sh@955 -- # uname 00:04:25.433 21:52:04 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.433 21:52:04 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495618 00:04:25.433 21:52:04 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:25.433 21:52:04 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.433 21:52:04 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495618' 00:04:25.433 killing process with pid 2495618 00:04:25.433 21:52:04 -- common/autotest_common.sh@969 -- # kill 2495618 00:04:25.433 21:52:04 -- common/autotest_common.sh@974 -- # wait 2495618 00:04:27.341 21:52:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:27.341 21:52:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:27.341 21:52:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:27.341 21:52:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:27.341 21:52:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:27.341 21:52:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.341 21:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.341 21:52:06 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:27.341 21:52:06 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:27.341 21:52:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:04:27.341 21:52:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.341 21:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:27.341 ************************************ 00:04:27.341 START TEST env 00:04:27.341 ************************************ 00:04:27.341 21:52:06 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:27.341 * Looking for test storage... 00:04:27.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:27.341 21:52:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:27.341 21:52:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.341 21:52:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.341 21:52:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.600 ************************************ 00:04:27.600 START TEST env_memory 00:04:27.600 ************************************ 00:04:27.600 21:52:06 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:27.601 00:04:27.601 00:04:27.601 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.601 http://cunit.sourceforge.net/ 00:04:27.601 00:04:27.601 00:04:27.601 Suite: memory 00:04:27.601 Test: alloc and free memory map ...[2024-07-24 21:52:06.613109] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:27.601 passed 00:04:27.601 Test: mem map translation ...[2024-07-24 21:52:06.631894] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:27.601 [2024-07-24 21:52:06.631910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:27.601 [2024-07-24 21:52:06.631947] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:27.601 [2024-07-24 21:52:06.631956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:27.601 passed 00:04:27.601 Test: mem map registration ...[2024-07-24 21:52:06.668201] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:27.601 [2024-07-24 21:52:06.668217] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:27.601 passed 00:04:27.601 Test: mem map adjacent registrations ...passed 00:04:27.601 00:04:27.601 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.601 suites 1 1 n/a 0 0 00:04:27.601 tests 4 4 4 0 0 00:04:27.601 asserts 152 152 152 0 n/a 00:04:27.601 00:04:27.601 Elapsed time = 0.136 seconds 00:04:27.601 00:04:27.601 real 0m0.151s 00:04:27.601 user 0m0.139s 00:04:27.601 sys 0m0.012s 00:04:27.601 21:52:06 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.601 21:52:06 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:27.601 ************************************ 00:04:27.601 END TEST env_memory 00:04:27.601 ************************************ 00:04:27.601 21:52:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:27.601 21:52:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.601 21:52:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.601 21:52:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.601 ************************************ 00:04:27.601 START TEST env_vtophys 00:04:27.601 ************************************ 00:04:27.601 21:52:06 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:27.861 EAL: lib.eal log level changed from notice to debug 00:04:27.861 EAL: Detected lcore 0 as core 0 on socket 0 00:04:27.861 EAL: Detected lcore 1 as core 1 on socket 0 00:04:27.861 EAL: Detected lcore 2 as core 2 on socket 0 00:04:27.861 EAL: Detected lcore 3 as core 3 on socket 0 00:04:27.861 EAL: Detected lcore 4 as core 4 on socket 0 00:04:27.861 EAL: Detected lcore 5 as core 5 on socket 0 00:04:27.861 EAL: Detected lcore 6 as core 6 on socket 0 00:04:27.861 EAL: Detected lcore 7 as core 8 on socket 0 00:04:27.861 EAL: Detected lcore 8 as core 9 on socket 0 00:04:27.861 EAL: Detected lcore 9 as core 10 on socket 0 00:04:27.861 EAL: Detected lcore 10 as core 11 on socket 0 00:04:27.861 EAL: Detected lcore 11 as core 12 on socket 0 00:04:27.861 EAL: Detected lcore 12 as core 13 on socket 0 00:04:27.861 EAL: Detected lcore 13 as core 14 on socket 0 00:04:27.861 EAL: Detected lcore 14 as core 16 on socket 0 00:04:27.861 EAL: Detected lcore 15 as core 17 on socket 0 00:04:27.861 EAL: Detected lcore 16 as core 18 on socket 0 00:04:27.861 EAL: Detected lcore 17 as core 19 on socket 0 00:04:27.861 EAL: Detected lcore 18 as core 20 on socket 0 00:04:27.861 EAL: Detected lcore 19 as core 21 on socket 0 00:04:27.861 EAL: Detected lcore 20 as core 22 on socket 0 00:04:27.861 EAL: Detected lcore 21 as core 24 on socket 0 00:04:27.861 EAL: Detected lcore 22 as core 25 on socket 0 00:04:27.861 EAL: Detected lcore 23 as core 26 on socket 0 00:04:27.861 EAL: Detected lcore 24 as core 27 on socket 0 00:04:27.861 EAL: Detected lcore 25 as core 28 on socket 0 00:04:27.861 EAL: Detected lcore 26 as core 29 on socket 0 00:04:27.861 EAL: Detected lcore 27 as core 30 on socket 0 00:04:27.861 EAL: Detected lcore 28 as core 0 on socket 1 00:04:27.861 EAL: Detected lcore 29 as core 1 on socket 1 00:04:27.861 EAL: Detected lcore 30 as core 2 on socket 1 00:04:27.861 EAL: Detected lcore 31 as core 3 on socket 1 00:04:27.861 EAL: Detected lcore 32 as core 4 on socket 1 00:04:27.861 EAL: Detected lcore 33 as core 5 on socket 1 00:04:27.861 EAL: Detected lcore 34 as core 6 on socket 1 00:04:27.861 EAL: Detected lcore 35 as core 8 on socket 1 00:04:27.861 EAL: Detected lcore 36 as core 9 on socket 1 00:04:27.861 EAL: Detected lcore 37 as core 10 on socket 1 00:04:27.861 EAL: Detected lcore 38 as core 11 on socket 1 00:04:27.861 EAL: Detected lcore 39 as core 12 on socket 1 00:04:27.861 EAL: Detected lcore 40 as core 13 on socket 1 00:04:27.861 EAL: Detected lcore 41 as core 14 on socket 1 00:04:27.861 EAL: Detected lcore 42 as core 16 on socket 1 00:04:27.861 EAL: Detected lcore 43 as core 17 on socket 1 00:04:27.861 EAL: Detected lcore 44 as core 18 on socket 1 00:04:27.861 EAL: Detected lcore 45 as core 19 on socket 1 
00:04:27.861 EAL: Detected lcore 46 as core 20 on socket 1 00:04:27.861 EAL: Detected lcore 47 as core 21 on socket 1 00:04:27.861 EAL: Detected lcore 48 as core 22 on socket 1 00:04:27.861 EAL: Detected lcore 49 as core 24 on socket 1 00:04:27.861 EAL: Detected lcore 50 as core 25 on socket 1 00:04:27.861 EAL: Detected lcore 51 as core 26 on socket 1 00:04:27.861 EAL: Detected lcore 52 as core 27 on socket 1 00:04:27.861 EAL: Detected lcore 53 as core 28 on socket 1 00:04:27.861 EAL: Detected lcore 54 as core 29 on socket 1 00:04:27.861 EAL: Detected lcore 55 as core 30 on socket 1 00:04:27.861 EAL: Detected lcore 56 as core 0 on socket 0 00:04:27.861 EAL: Detected lcore 57 as core 1 on socket 0 00:04:27.861 EAL: Detected lcore 58 as core 2 on socket 0 00:04:27.861 EAL: Detected lcore 59 as core 3 on socket 0 00:04:27.861 EAL: Detected lcore 60 as core 4 on socket 0 00:04:27.861 EAL: Detected lcore 61 as core 5 on socket 0 00:04:27.861 EAL: Detected lcore 62 as core 6 on socket 0 00:04:27.861 EAL: Detected lcore 63 as core 8 on socket 0 00:04:27.861 EAL: Detected lcore 64 as core 9 on socket 0 00:04:27.861 EAL: Detected lcore 65 as core 10 on socket 0 00:04:27.861 EAL: Detected lcore 66 as core 11 on socket 0 00:04:27.861 EAL: Detected lcore 67 as core 12 on socket 0 00:04:27.861 EAL: Detected lcore 68 as core 13 on socket 0 00:04:27.861 EAL: Detected lcore 69 as core 14 on socket 0 00:04:27.861 EAL: Detected lcore 70 as core 16 on socket 0 00:04:27.861 EAL: Detected lcore 71 as core 17 on socket 0 00:04:27.861 EAL: Detected lcore 72 as core 18 on socket 0 00:04:27.861 EAL: Detected lcore 73 as core 19 on socket 0 00:04:27.861 EAL: Detected lcore 74 as core 20 on socket 0 00:04:27.861 EAL: Detected lcore 75 as core 21 on socket 0 00:04:27.861 EAL: Detected lcore 76 as core 22 on socket 0 00:04:27.861 EAL: Detected lcore 77 as core 24 on socket 0 00:04:27.861 EAL: Detected lcore 78 as core 25 on socket 0 00:04:27.861 EAL: Detected lcore 79 as core 26 on socket 0 00:04:27.861 EAL: Detected lcore 80 as core 27 on socket 0 00:04:27.861 EAL: Detected lcore 81 as core 28 on socket 0 00:04:27.861 EAL: Detected lcore 82 as core 29 on socket 0 00:04:27.861 EAL: Detected lcore 83 as core 30 on socket 0 00:04:27.861 EAL: Detected lcore 84 as core 0 on socket 1 00:04:27.861 EAL: Detected lcore 85 as core 1 on socket 1 00:04:27.861 EAL: Detected lcore 86 as core 2 on socket 1 00:04:27.861 EAL: Detected lcore 87 as core 3 on socket 1 00:04:27.861 EAL: Detected lcore 88 as core 4 on socket 1 00:04:27.861 EAL: Detected lcore 89 as core 5 on socket 1 00:04:27.861 EAL: Detected lcore 90 as core 6 on socket 1 00:04:27.861 EAL: Detected lcore 91 as core 8 on socket 1 00:04:27.861 EAL: Detected lcore 92 as core 9 on socket 1 00:04:27.861 EAL: Detected lcore 93 as core 10 on socket 1 00:04:27.861 EAL: Detected lcore 94 as core 11 on socket 1 00:04:27.862 EAL: Detected lcore 95 as core 12 on socket 1 00:04:27.862 EAL: Detected lcore 96 as core 13 on socket 1 00:04:27.862 EAL: Detected lcore 97 as core 14 on socket 1 00:04:27.862 EAL: Detected lcore 98 as core 16 on socket 1 00:04:27.862 EAL: Detected lcore 99 as core 17 on socket 1 00:04:27.862 EAL: Detected lcore 100 as core 18 on socket 1 00:04:27.862 EAL: Detected lcore 101 as core 19 on socket 1 00:04:27.862 EAL: Detected lcore 102 as core 20 on socket 1 00:04:27.862 EAL: Detected lcore 103 as core 21 on socket 1 00:04:27.862 EAL: Detected lcore 104 as core 22 on socket 1 00:04:27.862 EAL: Detected lcore 105 as core 24 on socket 1 00:04:27.862 EAL: Detected 
lcore 106 as core 25 on socket 1 00:04:27.862 EAL: Detected lcore 107 as core 26 on socket 1 00:04:27.862 EAL: Detected lcore 108 as core 27 on socket 1 00:04:27.862 EAL: Detected lcore 109 as core 28 on socket 1 00:04:27.862 EAL: Detected lcore 110 as core 29 on socket 1 00:04:27.862 EAL: Detected lcore 111 as core 30 on socket 1 00:04:27.862 EAL: Maximum logical cores by configuration: 128 00:04:27.862 EAL: Detected CPU lcores: 112 00:04:27.862 EAL: Detected NUMA nodes: 2 00:04:27.862 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:27.862 EAL: Detected shared linkage of DPDK 00:04:27.862 EAL: No shared files mode enabled, IPC will be disabled 00:04:27.862 EAL: Bus pci wants IOVA as 'DC' 00:04:27.862 EAL: Buses did not request a specific IOVA mode. 00:04:27.862 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:27.862 EAL: Selected IOVA mode 'VA' 00:04:27.862 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.862 EAL: Probing VFIO support... 00:04:27.862 EAL: IOMMU type 1 (Type 1) is supported 00:04:27.862 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:27.862 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:27.862 EAL: VFIO support initialized 00:04:27.862 EAL: Ask a virtual area of 0x2e000 bytes 00:04:27.862 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:27.862 EAL: Setting up physically contiguous memory... 00:04:27.862 EAL: Setting maximum number of open files to 524288 00:04:27.862 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:27.862 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:27.862 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:27.862 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.862 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:27.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.862 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.862 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:27.862 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:27.862 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.862 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:27.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.862 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.862 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:27.862 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:27.862 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.862 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:27.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.862 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.862 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:27.862 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:27.862 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.862 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:27.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.862 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.862 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:27.862 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:27.862 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:27.862 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.862 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:27.862 EAL: 
Memseg list allocated at socket 1, page size 0x800kB 00:04:27.862 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.862 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:27.862 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:27.862 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.862 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:27.862 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.862 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.862 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:27.862 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:27.862 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.862 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:27.862 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.862 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.862 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:27.862 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:27.862 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.862 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:27.862 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.862 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.862 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:27.862 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:27.862 EAL: Hugepages will be freed exactly as allocated. 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: TSC frequency is ~2500000 KHz 00:04:27.862 EAL: Main lcore 0 is ready (tid=7f0a24439a00;cpuset=[0]) 00:04:27.862 EAL: Trying to obtain current memory policy. 00:04:27.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.862 EAL: Restoring previous memory policy: 0 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was expanded by 2MB 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:27.862 EAL: Mem event callback 'spdk:(nil)' registered 00:04:27.862 00:04:27.862 00:04:27.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.862 http://cunit.sourceforge.net/ 00:04:27.862 00:04:27.862 00:04:27.862 Suite: components_suite 00:04:27.862 Test: vtophys_malloc_test ...passed 00:04:27.862 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:27.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.862 EAL: Restoring previous memory policy: 4 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was expanded by 4MB 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was shrunk by 4MB 00:04:27.862 EAL: Trying to obtain current memory policy. 
00:04:27.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.862 EAL: Restoring previous memory policy: 4 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was expanded by 6MB 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was shrunk by 6MB 00:04:27.862 EAL: Trying to obtain current memory policy. 00:04:27.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.862 EAL: Restoring previous memory policy: 4 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was expanded by 10MB 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was shrunk by 10MB 00:04:27.862 EAL: Trying to obtain current memory policy. 00:04:27.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.862 EAL: Restoring previous memory policy: 4 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was expanded by 18MB 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was shrunk by 18MB 00:04:27.862 EAL: Trying to obtain current memory policy. 00:04:27.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.862 EAL: Restoring previous memory policy: 4 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was expanded by 34MB 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was shrunk by 34MB 00:04:27.862 EAL: Trying to obtain current memory policy. 00:04:27.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.862 EAL: Restoring previous memory policy: 4 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was expanded by 66MB 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was shrunk by 66MB 00:04:27.862 EAL: Trying to obtain current memory policy. 
00:04:27.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.862 EAL: Restoring previous memory policy: 4 00:04:27.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.862 EAL: request: mp_malloc_sync 00:04:27.862 EAL: No shared files mode enabled, IPC is disabled 00:04:27.862 EAL: Heap on socket 0 was expanded by 130MB 00:04:27.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.863 EAL: request: mp_malloc_sync 00:04:27.863 EAL: No shared files mode enabled, IPC is disabled 00:04:27.863 EAL: Heap on socket 0 was shrunk by 130MB 00:04:27.863 EAL: Trying to obtain current memory policy. 00:04:27.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.863 EAL: Restoring previous memory policy: 4 00:04:27.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.863 EAL: request: mp_malloc_sync 00:04:27.863 EAL: No shared files mode enabled, IPC is disabled 00:04:27.863 EAL: Heap on socket 0 was expanded by 258MB 00:04:28.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.123 EAL: request: mp_malloc_sync 00:04:28.123 EAL: No shared files mode enabled, IPC is disabled 00:04:28.123 EAL: Heap on socket 0 was shrunk by 258MB 00:04:28.123 EAL: Trying to obtain current memory policy. 00:04:28.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.123 EAL: Restoring previous memory policy: 4 00:04:28.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.123 EAL: request: mp_malloc_sync 00:04:28.123 EAL: No shared files mode enabled, IPC is disabled 00:04:28.123 EAL: Heap on socket 0 was expanded by 514MB 00:04:28.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.428 EAL: request: mp_malloc_sync 00:04:28.428 EAL: No shared files mode enabled, IPC is disabled 00:04:28.428 EAL: Heap on socket 0 was shrunk by 514MB 00:04:28.428 EAL: Trying to obtain current memory policy. 
00:04:28.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.428 EAL: Restoring previous memory policy: 4 00:04:28.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.428 EAL: request: mp_malloc_sync 00:04:28.428 EAL: No shared files mode enabled, IPC is disabled 00:04:28.428 EAL: Heap on socket 0 was expanded by 1026MB 00:04:28.697 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.697 EAL: request: mp_malloc_sync 00:04:28.697 EAL: No shared files mode enabled, IPC is disabled 00:04:28.697 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:28.697 passed 00:04:28.697 00:04:28.697 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.697 suites 1 1 n/a 0 0 00:04:28.697 tests 2 2 2 0 0 00:04:28.697 asserts 497 497 497 0 n/a 00:04:28.697 00:04:28.697 Elapsed time = 0.959 seconds 00:04:28.697 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.697 EAL: request: mp_malloc_sync 00:04:28.697 EAL: No shared files mode enabled, IPC is disabled 00:04:28.697 EAL: Heap on socket 0 was shrunk by 2MB 00:04:28.697 EAL: No shared files mode enabled, IPC is disabled 00:04:28.697 EAL: No shared files mode enabled, IPC is disabled 00:04:28.697 EAL: No shared files mode enabled, IPC is disabled 00:04:28.697 00:04:28.697 real 0m1.084s 00:04:28.697 user 0m0.627s 00:04:28.697 sys 0m0.432s 00:04:28.697 21:52:07 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.697 21:52:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:28.697 ************************************ 00:04:28.697 END TEST env_vtophys 00:04:28.697 ************************************ 00:04:28.956 21:52:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:28.956 21:52:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.956 21:52:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.956 21:52:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.956 ************************************ 00:04:28.956 START TEST env_pci 00:04:28.956 ************************************ 00:04:28.956 21:52:07 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:28.956 00:04:28.956 00:04:28.956 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.956 http://cunit.sourceforge.net/ 00:04:28.956 00:04:28.956 00:04:28.956 Suite: pci 00:04:28.956 Test: pci_hook ...[2024-07-24 21:52:07.981539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2497095 has claimed it 00:04:28.956 EAL: Cannot find device (10000:00:01.0) 00:04:28.956 EAL: Failed to attach device on primary process 00:04:28.956 passed 00:04:28.956 00:04:28.957 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.957 suites 1 1 n/a 0 0 00:04:28.957 tests 1 1 1 0 0 00:04:28.957 asserts 25 25 25 0 n/a 00:04:28.957 00:04:28.957 Elapsed time = 0.034 seconds 00:04:28.957 00:04:28.957 real 0m0.056s 00:04:28.957 user 0m0.016s 00:04:28.957 sys 0m0.040s 00:04:28.957 21:52:08 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.957 21:52:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:28.957 ************************************ 00:04:28.957 END TEST env_pci 00:04:28.957 ************************************ 00:04:28.957 21:52:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:28.957 
21:52:08 env -- env/env.sh@15 -- # uname 00:04:28.957 21:52:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:28.957 21:52:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:28.957 21:52:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.957 21:52:08 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:28.957 21:52:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.957 21:52:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.957 ************************************ 00:04:28.957 START TEST env_dpdk_post_init 00:04:28.957 ************************************ 00:04:28.957 21:52:08 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.957 EAL: Detected CPU lcores: 112 00:04:28.957 EAL: Detected NUMA nodes: 2 00:04:28.957 EAL: Detected shared linkage of DPDK 00:04:28.957 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.957 EAL: Selected IOVA mode 'VA' 00:04:28.957 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.957 EAL: VFIO support initialized 00:04:29.217 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.217 EAL: Using IOMMU type 1 (Type 1) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:29.217 EAL: Ignore mapping IO port bar(1) 00:04:29.217 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:29.477 EAL: 
Ignore mapping IO port bar(1) 00:04:29.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:30.046 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:34.241 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:34.241 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:34.241 Starting DPDK initialization... 00:04:34.241 Starting SPDK post initialization... 00:04:34.241 SPDK NVMe probe 00:04:34.241 Attaching to 0000:d8:00.0 00:04:34.241 Attached to 0000:d8:00.0 00:04:34.241 Cleaning up... 00:04:34.241 00:04:34.241 real 0m4.971s 00:04:34.241 user 0m3.679s 00:04:34.241 sys 0m0.346s 00:04:34.241 21:52:13 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.241 21:52:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.241 ************************************ 00:04:34.241 END TEST env_dpdk_post_init 00:04:34.241 ************************************ 00:04:34.241 21:52:13 env -- env/env.sh@26 -- # uname 00:04:34.241 21:52:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:34.241 21:52:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:34.241 21:52:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.241 21:52:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.241 21:52:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.241 ************************************ 00:04:34.241 START TEST env_mem_callbacks 00:04:34.241 ************************************ 00:04:34.241 21:52:13 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:34.241 EAL: Detected CPU lcores: 112 00:04:34.241 EAL: Detected NUMA nodes: 2 00:04:34.241 EAL: Detected shared linkage of DPDK 00:04:34.241 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.241 EAL: Selected IOVA mode 'VA' 00:04:34.241 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.241 EAL: VFIO support initialized 00:04:34.241 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:34.241 00:04:34.241 00:04:34.241 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.241 http://cunit.sourceforge.net/ 00:04:34.241 00:04:34.241 00:04:34.241 Suite: memory 00:04:34.241 Test: test ... 
00:04:34.241 register 0x200000200000 2097152 00:04:34.241 malloc 3145728 00:04:34.241 register 0x200000400000 4194304 00:04:34.241 buf 0x200000500000 len 3145728 PASSED 00:04:34.241 malloc 64 00:04:34.241 buf 0x2000004fff40 len 64 PASSED 00:04:34.241 malloc 4194304 00:04:34.241 register 0x200000800000 6291456 00:04:34.241 buf 0x200000a00000 len 4194304 PASSED 00:04:34.241 free 0x200000500000 3145728 00:04:34.241 free 0x2000004fff40 64 00:04:34.241 unregister 0x200000400000 4194304 PASSED 00:04:34.241 free 0x200000a00000 4194304 00:04:34.241 unregister 0x200000800000 6291456 PASSED 00:04:34.241 malloc 8388608 00:04:34.241 register 0x200000400000 10485760 00:04:34.241 buf 0x200000600000 len 8388608 PASSED 00:04:34.241 free 0x200000600000 8388608 00:04:34.242 unregister 0x200000400000 10485760 PASSED 00:04:34.242 passed 00:04:34.242 00:04:34.242 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.242 suites 1 1 n/a 0 0 00:04:34.242 tests 1 1 1 0 0 00:04:34.242 asserts 15 15 15 0 n/a 00:04:34.242 00:04:34.242 Elapsed time = 0.006 seconds 00:04:34.242 00:04:34.242 real 0m0.067s 00:04:34.242 user 0m0.019s 00:04:34.242 sys 0m0.048s 00:04:34.242 21:52:13 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.242 21:52:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:34.242 ************************************ 00:04:34.242 END TEST env_mem_callbacks 00:04:34.242 ************************************ 00:04:34.242 00:04:34.242 real 0m6.852s 00:04:34.242 user 0m4.673s 00:04:34.242 sys 0m1.251s 00:04:34.242 21:52:13 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.242 21:52:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.242 ************************************ 00:04:34.242 END TEST env 00:04:34.242 ************************************ 00:04:34.242 21:52:13 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.242 21:52:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.242 21:52:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.242 21:52:13 -- common/autotest_common.sh@10 -- # set +x 00:04:34.242 ************************************ 00:04:34.242 START TEST rpc 00:04:34.242 ************************************ 00:04:34.242 21:52:13 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.501 * Looking for test storage... 00:04:34.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:34.501 21:52:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2498076 00:04:34.501 21:52:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.501 21:52:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2498076 00:04:34.501 21:52:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:34.501 21:52:13 rpc -- common/autotest_common.sh@831 -- # '[' -z 2498076 ']' 00:04:34.501 21:52:13 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.501 21:52:13 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.501 21:52:13 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
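For the rpc tests, rpc.sh launches a fresh spdk_tgt with the bdev tracepoint group enabled (-e bdev, confirmed by the app_setup_trace notices below) and then sits in waitforlisten until pid 2498076 answers on /var/tmp/spdk.sock. A rough sketch of such a wait loop, assuming the usual poll-the-socket approach; rpc_get_methods is used here only as a cheap liveness probe, and the real helper in autotest_common.sh may differ in its retry and timeout details:

  # rough sketch, not the exact autotest_common.sh helper
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                        # target exited early
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1                                                          # never came up
  }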
00:04:34.501 21:52:13 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.501 21:52:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.501 [2024-07-24 21:52:13.508308] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:04:34.501 [2024-07-24 21:52:13.508357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498076 ] 00:04:34.501 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.501 [2024-07-24 21:52:13.577315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.501 [2024-07-24 21:52:13.652941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:34.501 [2024-07-24 21:52:13.652977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2498076' to capture a snapshot of events at runtime. 00:04:34.501 [2024-07-24 21:52:13.652987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:34.501 [2024-07-24 21:52:13.652995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:34.501 [2024-07-24 21:52:13.653002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2498076 for offline analysis/debug. 00:04:34.501 [2024-07-24 21:52:13.653025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.437 21:52:14 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.437 21:52:14 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.437 21:52:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.437 21:52:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.437 21:52:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.437 21:52:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.437 21:52:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.437 21:52:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.437 21:52:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.438 ************************************ 00:04:35.438 START TEST rpc_integrity 00:04:35.438 ************************************ 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.438 21:52:14 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.438 { 00:04:35.438 "name": "Malloc0", 00:04:35.438 "aliases": [ 00:04:35.438 "6721d0c1-151f-4789-98ea-b924d9457ca0" 00:04:35.438 ], 00:04:35.438 "product_name": "Malloc disk", 00:04:35.438 "block_size": 512, 00:04:35.438 "num_blocks": 16384, 00:04:35.438 "uuid": "6721d0c1-151f-4789-98ea-b924d9457ca0", 00:04:35.438 "assigned_rate_limits": { 00:04:35.438 "rw_ios_per_sec": 0, 00:04:35.438 "rw_mbytes_per_sec": 0, 00:04:35.438 "r_mbytes_per_sec": 0, 00:04:35.438 "w_mbytes_per_sec": 0 00:04:35.438 }, 00:04:35.438 "claimed": false, 00:04:35.438 "zoned": false, 00:04:35.438 "supported_io_types": { 00:04:35.438 "read": true, 00:04:35.438 "write": true, 00:04:35.438 "unmap": true, 00:04:35.438 "flush": true, 00:04:35.438 "reset": true, 00:04:35.438 "nvme_admin": false, 00:04:35.438 "nvme_io": false, 00:04:35.438 "nvme_io_md": false, 00:04:35.438 "write_zeroes": true, 00:04:35.438 "zcopy": true, 00:04:35.438 "get_zone_info": false, 00:04:35.438 "zone_management": false, 00:04:35.438 "zone_append": false, 00:04:35.438 "compare": false, 00:04:35.438 "compare_and_write": false, 00:04:35.438 "abort": true, 00:04:35.438 "seek_hole": false, 00:04:35.438 "seek_data": false, 00:04:35.438 "copy": true, 00:04:35.438 "nvme_iov_md": false 00:04:35.438 }, 00:04:35.438 "memory_domains": [ 00:04:35.438 { 00:04:35.438 "dma_device_id": "system", 00:04:35.438 "dma_device_type": 1 00:04:35.438 }, 00:04:35.438 { 00:04:35.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.438 "dma_device_type": 2 00:04:35.438 } 00:04:35.438 ], 00:04:35.438 "driver_specific": {} 00:04:35.438 } 00:04:35.438 ]' 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.438 [2024-07-24 21:52:14.456010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.438 [2024-07-24 21:52:14.456039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.438 [2024-07-24 21:52:14.456053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2379440 00:04:35.438 [2024-07-24 21:52:14.456062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.438 [2024-07-24 21:52:14.457083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
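
The JSON array printed next is the bdev_get_bdevs view after rpc_integrity layers a passthru bdev on top of the malloc bdev: Malloc0 now shows up as claimed and Passthru0 appears alongside it. The same create/inspect/tear-down cycle can be reproduced by hand against a running target; a rough sketch, assuming scripts/rpc.py and the default RPC socket (the bdev names match the trace):

    ./scripts/rpc.py bdev_malloc_create 8 512              # 8 MiB of 512-byte blocks -> 16384 blocks, prints Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length            # 2 bdevs now
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length            # back to 0
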
00:04:35.438 [2024-07-24 21:52:14.457106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.438 Passthru0 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.438 { 00:04:35.438 "name": "Malloc0", 00:04:35.438 "aliases": [ 00:04:35.438 "6721d0c1-151f-4789-98ea-b924d9457ca0" 00:04:35.438 ], 00:04:35.438 "product_name": "Malloc disk", 00:04:35.438 "block_size": 512, 00:04:35.438 "num_blocks": 16384, 00:04:35.438 "uuid": "6721d0c1-151f-4789-98ea-b924d9457ca0", 00:04:35.438 "assigned_rate_limits": { 00:04:35.438 "rw_ios_per_sec": 0, 00:04:35.438 "rw_mbytes_per_sec": 0, 00:04:35.438 "r_mbytes_per_sec": 0, 00:04:35.438 "w_mbytes_per_sec": 0 00:04:35.438 }, 00:04:35.438 "claimed": true, 00:04:35.438 "claim_type": "exclusive_write", 00:04:35.438 "zoned": false, 00:04:35.438 "supported_io_types": { 00:04:35.438 "read": true, 00:04:35.438 "write": true, 00:04:35.438 "unmap": true, 00:04:35.438 "flush": true, 00:04:35.438 "reset": true, 00:04:35.438 "nvme_admin": false, 00:04:35.438 "nvme_io": false, 00:04:35.438 "nvme_io_md": false, 00:04:35.438 "write_zeroes": true, 00:04:35.438 "zcopy": true, 00:04:35.438 "get_zone_info": false, 00:04:35.438 "zone_management": false, 00:04:35.438 "zone_append": false, 00:04:35.438 "compare": false, 00:04:35.438 "compare_and_write": false, 00:04:35.438 "abort": true, 00:04:35.438 "seek_hole": false, 00:04:35.438 "seek_data": false, 00:04:35.438 "copy": true, 00:04:35.438 "nvme_iov_md": false 00:04:35.438 }, 00:04:35.438 "memory_domains": [ 00:04:35.438 { 00:04:35.438 "dma_device_id": "system", 00:04:35.438 "dma_device_type": 1 00:04:35.438 }, 00:04:35.438 { 00:04:35.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.438 "dma_device_type": 2 00:04:35.438 } 00:04:35.438 ], 00:04:35.438 "driver_specific": {} 00:04:35.438 }, 00:04:35.438 { 00:04:35.438 "name": "Passthru0", 00:04:35.438 "aliases": [ 00:04:35.438 "53d56a34-8cba-5e52-90c1-3b28d61ea342" 00:04:35.438 ], 00:04:35.438 "product_name": "passthru", 00:04:35.438 "block_size": 512, 00:04:35.438 "num_blocks": 16384, 00:04:35.438 "uuid": "53d56a34-8cba-5e52-90c1-3b28d61ea342", 00:04:35.438 "assigned_rate_limits": { 00:04:35.438 "rw_ios_per_sec": 0, 00:04:35.438 "rw_mbytes_per_sec": 0, 00:04:35.438 "r_mbytes_per_sec": 0, 00:04:35.438 "w_mbytes_per_sec": 0 00:04:35.438 }, 00:04:35.438 "claimed": false, 00:04:35.438 "zoned": false, 00:04:35.438 "supported_io_types": { 00:04:35.438 "read": true, 00:04:35.438 "write": true, 00:04:35.438 "unmap": true, 00:04:35.438 "flush": true, 00:04:35.438 "reset": true, 00:04:35.438 "nvme_admin": false, 00:04:35.438 "nvme_io": false, 00:04:35.438 "nvme_io_md": false, 00:04:35.438 "write_zeroes": true, 00:04:35.438 "zcopy": true, 00:04:35.438 "get_zone_info": false, 00:04:35.438 "zone_management": false, 00:04:35.438 "zone_append": false, 00:04:35.438 "compare": false, 00:04:35.438 "compare_and_write": false, 00:04:35.438 "abort": true, 00:04:35.438 "seek_hole": false, 00:04:35.438 "seek_data": false, 00:04:35.438 "copy": true, 00:04:35.438 "nvme_iov_md": false 00:04:35.438 
}, 00:04:35.438 "memory_domains": [ 00:04:35.438 { 00:04:35.438 "dma_device_id": "system", 00:04:35.438 "dma_device_type": 1 00:04:35.438 }, 00:04:35.438 { 00:04:35.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.438 "dma_device_type": 2 00:04:35.438 } 00:04:35.438 ], 00:04:35.438 "driver_specific": { 00:04:35.438 "passthru": { 00:04:35.438 "name": "Passthru0", 00:04:35.438 "base_bdev_name": "Malloc0" 00:04:35.438 } 00:04:35.438 } 00:04:35.438 } 00:04:35.438 ]' 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.438 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.438 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.439 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.439 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.439 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.439 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.439 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.439 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.439 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.439 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.439 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.439 21:52:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.439 00:04:35.439 real 0m0.285s 00:04:35.439 user 0m0.167s 00:04:35.439 sys 0m0.050s 00:04:35.439 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.439 21:52:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.439 ************************************ 00:04:35.439 END TEST rpc_integrity 00:04:35.439 ************************************ 00:04:35.439 21:52:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:35.439 21:52:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.439 21:52:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.698 21:52:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.698 ************************************ 00:04:35.698 START TEST rpc_plugins 00:04:35.698 ************************************ 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.698 21:52:14 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:35.698 { 00:04:35.698 "name": "Malloc1", 00:04:35.698 "aliases": [ 00:04:35.698 "612d891a-1f63-4af8-b272-e684b1b157d3" 00:04:35.698 ], 00:04:35.698 "product_name": "Malloc disk", 00:04:35.698 "block_size": 4096, 00:04:35.698 "num_blocks": 256, 00:04:35.698 "uuid": "612d891a-1f63-4af8-b272-e684b1b157d3", 00:04:35.698 "assigned_rate_limits": { 00:04:35.698 "rw_ios_per_sec": 0, 00:04:35.698 "rw_mbytes_per_sec": 0, 00:04:35.698 "r_mbytes_per_sec": 0, 00:04:35.698 "w_mbytes_per_sec": 0 00:04:35.698 }, 00:04:35.698 "claimed": false, 00:04:35.698 "zoned": false, 00:04:35.698 "supported_io_types": { 00:04:35.698 "read": true, 00:04:35.698 "write": true, 00:04:35.698 "unmap": true, 00:04:35.698 "flush": true, 00:04:35.698 "reset": true, 00:04:35.698 "nvme_admin": false, 00:04:35.698 "nvme_io": false, 00:04:35.698 "nvme_io_md": false, 00:04:35.698 "write_zeroes": true, 00:04:35.698 "zcopy": true, 00:04:35.698 "get_zone_info": false, 00:04:35.698 "zone_management": false, 00:04:35.698 "zone_append": false, 00:04:35.698 "compare": false, 00:04:35.698 "compare_and_write": false, 00:04:35.698 "abort": true, 00:04:35.698 "seek_hole": false, 00:04:35.698 "seek_data": false, 00:04:35.698 "copy": true, 00:04:35.698 "nvme_iov_md": false 00:04:35.698 }, 00:04:35.698 "memory_domains": [ 00:04:35.698 { 00:04:35.698 "dma_device_id": "system", 00:04:35.698 "dma_device_type": 1 00:04:35.698 }, 00:04:35.698 { 00:04:35.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.698 "dma_device_type": 2 00:04:35.698 } 00:04:35.698 ], 00:04:35.698 "driver_specific": {} 00:04:35.698 } 00:04:35.698 ]' 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:35.698 21:52:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:35.698 00:04:35.698 real 0m0.142s 00:04:35.698 user 0m0.082s 00:04:35.698 sys 0m0.026s 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.698 21:52:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.698 ************************************ 00:04:35.698 END TEST rpc_plugins 00:04:35.698 ************************************ 00:04:35.698 21:52:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:35.698 21:52:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.698 21:52:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.698 21:52:14 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.957 ************************************ 00:04:35.957 START TEST rpc_trace_cmd_test 00:04:35.957 ************************************ 00:04:35.957 21:52:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:35.957 21:52:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:35.957 21:52:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:35.957 21:52:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.957 21:52:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.957 21:52:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.957 21:52:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:35.957 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2498076", 00:04:35.957 "tpoint_group_mask": "0x8", 00:04:35.957 "iscsi_conn": { 00:04:35.957 "mask": "0x2", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "scsi": { 00:04:35.957 "mask": "0x4", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "bdev": { 00:04:35.957 "mask": "0x8", 00:04:35.957 "tpoint_mask": "0xffffffffffffffff" 00:04:35.957 }, 00:04:35.957 "nvmf_rdma": { 00:04:35.957 "mask": "0x10", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "nvmf_tcp": { 00:04:35.957 "mask": "0x20", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "ftl": { 00:04:35.957 "mask": "0x40", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "blobfs": { 00:04:35.957 "mask": "0x80", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "dsa": { 00:04:35.957 "mask": "0x200", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "thread": { 00:04:35.957 "mask": "0x400", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "nvme_pcie": { 00:04:35.957 "mask": "0x800", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "iaa": { 00:04:35.957 "mask": "0x1000", 00:04:35.957 "tpoint_mask": "0x0" 00:04:35.957 }, 00:04:35.957 "nvme_tcp": { 00:04:35.958 "mask": "0x2000", 00:04:35.958 "tpoint_mask": "0x0" 00:04:35.958 }, 00:04:35.958 "bdev_nvme": { 00:04:35.958 "mask": "0x4000", 00:04:35.958 "tpoint_mask": "0x0" 00:04:35.958 }, 00:04:35.958 "sock": { 00:04:35.958 "mask": "0x8000", 00:04:35.958 "tpoint_mask": "0x0" 00:04:35.958 } 00:04:35.958 }' 00:04:35.958 21:52:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:35.958 21:52:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:35.958 21:52:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:35.958 21:52:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:35.958 21:52:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:35.958 21:52:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:35.958 21:52:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.958 21:52:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.958 21:52:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.958 21:52:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.958 00:04:35.958 real 0m0.223s 00:04:35.958 user 0m0.183s 00:04:35.958 sys 0m0.032s 00:04:35.958 21:52:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.958 21:52:15 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.958 ************************************ 00:04:35.958 END TEST rpc_trace_cmd_test 00:04:35.958 ************************************ 00:04:36.217 21:52:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:36.217 21:52:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:36.217 21:52:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:36.217 21:52:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.217 21:52:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.217 21:52:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.217 ************************************ 00:04:36.217 START TEST rpc_daemon_integrity 00:04:36.217 ************************************ 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.217 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.218 { 00:04:36.218 "name": "Malloc2", 00:04:36.218 "aliases": [ 00:04:36.218 "de5238e7-c6b3-422c-916d-548b781794be" 00:04:36.218 ], 00:04:36.218 "product_name": "Malloc disk", 00:04:36.218 "block_size": 512, 00:04:36.218 "num_blocks": 16384, 00:04:36.218 "uuid": "de5238e7-c6b3-422c-916d-548b781794be", 00:04:36.218 "assigned_rate_limits": { 00:04:36.218 "rw_ios_per_sec": 0, 00:04:36.218 "rw_mbytes_per_sec": 0, 00:04:36.218 "r_mbytes_per_sec": 0, 00:04:36.218 "w_mbytes_per_sec": 0 00:04:36.218 }, 00:04:36.218 "claimed": false, 00:04:36.218 "zoned": false, 00:04:36.218 "supported_io_types": { 00:04:36.218 "read": true, 00:04:36.218 "write": true, 00:04:36.218 "unmap": true, 00:04:36.218 "flush": true, 00:04:36.218 "reset": true, 00:04:36.218 "nvme_admin": false, 00:04:36.218 "nvme_io": false, 00:04:36.218 "nvme_io_md": false, 00:04:36.218 "write_zeroes": true, 00:04:36.218 "zcopy": true, 00:04:36.218 "get_zone_info": false, 00:04:36.218 "zone_management": false, 00:04:36.218 "zone_append": false, 00:04:36.218 "compare": false, 00:04:36.218 "compare_and_write": false, 
00:04:36.218 "abort": true, 00:04:36.218 "seek_hole": false, 00:04:36.218 "seek_data": false, 00:04:36.218 "copy": true, 00:04:36.218 "nvme_iov_md": false 00:04:36.218 }, 00:04:36.218 "memory_domains": [ 00:04:36.218 { 00:04:36.218 "dma_device_id": "system", 00:04:36.218 "dma_device_type": 1 00:04:36.218 }, 00:04:36.218 { 00:04:36.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.218 "dma_device_type": 2 00:04:36.218 } 00:04:36.218 ], 00:04:36.218 "driver_specific": {} 00:04:36.218 } 00:04:36.218 ]' 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.218 [2024-07-24 21:52:15.334400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:36.218 [2024-07-24 21:52:15.334428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.218 [2024-07-24 21:52:15.334441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x251d350 00:04:36.218 [2024-07-24 21:52:15.334449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.218 [2024-07-24 21:52:15.335357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.218 [2024-07-24 21:52:15.335379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.218 Passthru0 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.218 { 00:04:36.218 "name": "Malloc2", 00:04:36.218 "aliases": [ 00:04:36.218 "de5238e7-c6b3-422c-916d-548b781794be" 00:04:36.218 ], 00:04:36.218 "product_name": "Malloc disk", 00:04:36.218 "block_size": 512, 00:04:36.218 "num_blocks": 16384, 00:04:36.218 "uuid": "de5238e7-c6b3-422c-916d-548b781794be", 00:04:36.218 "assigned_rate_limits": { 00:04:36.218 "rw_ios_per_sec": 0, 00:04:36.218 "rw_mbytes_per_sec": 0, 00:04:36.218 "r_mbytes_per_sec": 0, 00:04:36.218 "w_mbytes_per_sec": 0 00:04:36.218 }, 00:04:36.218 "claimed": true, 00:04:36.218 "claim_type": "exclusive_write", 00:04:36.218 "zoned": false, 00:04:36.218 "supported_io_types": { 00:04:36.218 "read": true, 00:04:36.218 "write": true, 00:04:36.218 "unmap": true, 00:04:36.218 "flush": true, 00:04:36.218 "reset": true, 00:04:36.218 "nvme_admin": false, 00:04:36.218 "nvme_io": false, 00:04:36.218 "nvme_io_md": false, 00:04:36.218 "write_zeroes": true, 00:04:36.218 "zcopy": true, 00:04:36.218 "get_zone_info": false, 00:04:36.218 "zone_management": false, 00:04:36.218 "zone_append": false, 00:04:36.218 "compare": false, 00:04:36.218 "compare_and_write": false, 00:04:36.218 "abort": true, 00:04:36.218 "seek_hole": false, 00:04:36.218 "seek_data": false, 00:04:36.218 "copy": true, 
00:04:36.218 "nvme_iov_md": false 00:04:36.218 }, 00:04:36.218 "memory_domains": [ 00:04:36.218 { 00:04:36.218 "dma_device_id": "system", 00:04:36.218 "dma_device_type": 1 00:04:36.218 }, 00:04:36.218 { 00:04:36.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.218 "dma_device_type": 2 00:04:36.218 } 00:04:36.218 ], 00:04:36.218 "driver_specific": {} 00:04:36.218 }, 00:04:36.218 { 00:04:36.218 "name": "Passthru0", 00:04:36.218 "aliases": [ 00:04:36.218 "cd76744f-7142-5a6a-8ad6-c53f6dd7fafd" 00:04:36.218 ], 00:04:36.218 "product_name": "passthru", 00:04:36.218 "block_size": 512, 00:04:36.218 "num_blocks": 16384, 00:04:36.218 "uuid": "cd76744f-7142-5a6a-8ad6-c53f6dd7fafd", 00:04:36.218 "assigned_rate_limits": { 00:04:36.218 "rw_ios_per_sec": 0, 00:04:36.218 "rw_mbytes_per_sec": 0, 00:04:36.218 "r_mbytes_per_sec": 0, 00:04:36.218 "w_mbytes_per_sec": 0 00:04:36.218 }, 00:04:36.218 "claimed": false, 00:04:36.218 "zoned": false, 00:04:36.218 "supported_io_types": { 00:04:36.218 "read": true, 00:04:36.218 "write": true, 00:04:36.218 "unmap": true, 00:04:36.218 "flush": true, 00:04:36.218 "reset": true, 00:04:36.218 "nvme_admin": false, 00:04:36.218 "nvme_io": false, 00:04:36.218 "nvme_io_md": false, 00:04:36.218 "write_zeroes": true, 00:04:36.218 "zcopy": true, 00:04:36.218 "get_zone_info": false, 00:04:36.218 "zone_management": false, 00:04:36.218 "zone_append": false, 00:04:36.218 "compare": false, 00:04:36.218 "compare_and_write": false, 00:04:36.218 "abort": true, 00:04:36.218 "seek_hole": false, 00:04:36.218 "seek_data": false, 00:04:36.218 "copy": true, 00:04:36.218 "nvme_iov_md": false 00:04:36.218 }, 00:04:36.218 "memory_domains": [ 00:04:36.218 { 00:04:36.218 "dma_device_id": "system", 00:04:36.218 "dma_device_type": 1 00:04:36.218 }, 00:04:36.218 { 00:04:36.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.218 "dma_device_type": 2 00:04:36.218 } 00:04:36.218 ], 00:04:36.218 "driver_specific": { 00:04:36.218 "passthru": { 00:04:36.218 "name": "Passthru0", 00:04:36.218 "base_bdev_name": "Malloc2" 00:04:36.218 } 00:04:36.218 } 00:04:36.218 } 00:04:36.218 ]' 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.218 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.478 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.478 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.478 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.478 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.478 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.478 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.478 21:52:15 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.478 21:52:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.478 00:04:36.478 real 0m0.289s 00:04:36.478 user 0m0.173s 00:04:36.478 sys 0m0.048s 00:04:36.478 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.478 21:52:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.478 ************************************ 00:04:36.478 END TEST rpc_daemon_integrity 00:04:36.478 ************************************ 00:04:36.478 21:52:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:36.478 21:52:15 rpc -- rpc/rpc.sh@84 -- # killprocess 2498076 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@950 -- # '[' -z 2498076 ']' 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@954 -- # kill -0 2498076 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2498076 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2498076' 00:04:36.478 killing process with pid 2498076 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@969 -- # kill 2498076 00:04:36.478 21:52:15 rpc -- common/autotest_common.sh@974 -- # wait 2498076 00:04:36.737 00:04:36.737 real 0m2.520s 00:04:36.737 user 0m3.173s 00:04:36.737 sys 0m0.778s 00:04:36.737 21:52:15 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.737 21:52:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.737 ************************************ 00:04:36.737 END TEST rpc 00:04:36.737 ************************************ 00:04:36.738 21:52:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:36.738 21:52:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.738 21:52:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.738 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.997 ************************************ 00:04:36.997 START TEST skip_rpc 00:04:36.997 ************************************ 00:04:36.997 21:52:15 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:36.997 * Looking for test storage... 
00:04:36.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.997 21:52:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.997 21:52:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:36.997 21:52:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:36.997 21:52:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.997 21:52:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.997 21:52:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.997 ************************************ 00:04:36.997 START TEST skip_rpc 00:04:36.997 ************************************ 00:04:36.997 21:52:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:36.997 21:52:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2498775 00:04:36.997 21:52:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.997 21:52:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:36.997 21:52:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:36.997 [2024-07-24 21:52:16.166874] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:04:36.997 [2024-07-24 21:52:16.166917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498775 ] 00:04:36.997 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.256 [2024-07-24 21:52:16.234632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.256 [2024-07-24 21:52:16.302599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2498775 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2498775 ']' 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2498775 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2498775 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2498775' 00:04:42.542 killing process with pid 2498775 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2498775 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2498775 00:04:42.542 00:04:42.542 real 0m5.368s 00:04:42.542 user 0m5.120s 00:04:42.542 sys 0m0.286s 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.542 21:52:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.542 ************************************ 00:04:42.542 END TEST skip_rpc 00:04:42.542 ************************************ 00:04:42.542 21:52:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.542 21:52:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.542 21:52:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.542 21:52:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.542 ************************************ 00:04:42.542 START TEST skip_rpc_with_json 00:04:42.542 ************************************ 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2499648 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2499648 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2499648 ']' 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
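
skip_rpc_with_json exercises SPDK's save/restore path: it creates a TCP transport on the running target, dumps the whole configuration with save_config (that dump is the config.json printed below), then restarts spdk_tgt from that file and greps its log for 'TCP Transport Init' to prove the transport was recreated without any RPC calls. A condensed sketch of the same flow, assuming scripts/rpc.py and treating the paths as stand-ins for your own build tree:

    ./scripts/rpc.py nvmf_create_transport -t tcp          # give the target something worth saving
    ./scripts/rpc.py save_config > config.json             # the JSON below is this file's contents
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5 && kill $!                                     # let it initialize, then stop it
    grep -q 'TCP Transport Init' log.txt                   # transport came back purely from the saved config
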
00:04:42.542 21:52:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.543 21:52:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.543 [2024-07-24 21:52:21.621799] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:04:42.543 [2024-07-24 21:52:21.621844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499648 ] 00:04:42.543 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.543 [2024-07-24 21:52:21.690751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.802 [2024-07-24 21:52:21.764374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.370 [2024-07-24 21:52:22.414913] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:43.370 request: 00:04:43.370 { 00:04:43.370 "trtype": "tcp", 00:04:43.370 "method": "nvmf_get_transports", 00:04:43.370 "req_id": 1 00:04:43.370 } 00:04:43.370 Got JSON-RPC error response 00:04:43.370 response: 00:04:43.370 { 00:04:43.370 "code": -19, 00:04:43.370 "message": "No such device" 00:04:43.370 } 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.370 [2024-07-24 21:52:22.427011] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.370 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.629 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.629 21:52:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.629 { 00:04:43.629 "subsystems": [ 00:04:43.629 { 00:04:43.629 "subsystem": "vfio_user_target", 00:04:43.629 "config": null 00:04:43.629 }, 00:04:43.629 { 00:04:43.629 "subsystem": "keyring", 00:04:43.629 "config": [] 00:04:43.629 }, 00:04:43.629 { 00:04:43.629 "subsystem": "iobuf", 00:04:43.629 "config": [ 00:04:43.629 { 00:04:43.629 "method": "iobuf_set_options", 00:04:43.629 "params": { 00:04:43.629 "small_pool_count": 8192, 00:04:43.629 "large_pool_count": 1024, 00:04:43.629 "small_bufsize": 8192, 00:04:43.629 "large_bufsize": 
135168 00:04:43.629 } 00:04:43.629 } 00:04:43.629 ] 00:04:43.629 }, 00:04:43.629 { 00:04:43.629 "subsystem": "sock", 00:04:43.629 "config": [ 00:04:43.629 { 00:04:43.629 "method": "sock_set_default_impl", 00:04:43.629 "params": { 00:04:43.629 "impl_name": "posix" 00:04:43.629 } 00:04:43.629 }, 00:04:43.629 { 00:04:43.629 "method": "sock_impl_set_options", 00:04:43.629 "params": { 00:04:43.629 "impl_name": "ssl", 00:04:43.629 "recv_buf_size": 4096, 00:04:43.629 "send_buf_size": 4096, 00:04:43.629 "enable_recv_pipe": true, 00:04:43.629 "enable_quickack": false, 00:04:43.629 "enable_placement_id": 0, 00:04:43.629 "enable_zerocopy_send_server": true, 00:04:43.629 "enable_zerocopy_send_client": false, 00:04:43.629 "zerocopy_threshold": 0, 00:04:43.629 "tls_version": 0, 00:04:43.629 "enable_ktls": false 00:04:43.629 } 00:04:43.629 }, 00:04:43.629 { 00:04:43.629 "method": "sock_impl_set_options", 00:04:43.629 "params": { 00:04:43.629 "impl_name": "posix", 00:04:43.629 "recv_buf_size": 2097152, 00:04:43.629 "send_buf_size": 2097152, 00:04:43.629 "enable_recv_pipe": true, 00:04:43.629 "enable_quickack": false, 00:04:43.629 "enable_placement_id": 0, 00:04:43.629 "enable_zerocopy_send_server": true, 00:04:43.629 "enable_zerocopy_send_client": false, 00:04:43.629 "zerocopy_threshold": 0, 00:04:43.629 "tls_version": 0, 00:04:43.629 "enable_ktls": false 00:04:43.629 } 00:04:43.629 } 00:04:43.629 ] 00:04:43.629 }, 00:04:43.629 { 00:04:43.629 "subsystem": "vmd", 00:04:43.629 "config": [] 00:04:43.629 }, 00:04:43.629 { 00:04:43.629 "subsystem": "accel", 00:04:43.629 "config": [ 00:04:43.629 { 00:04:43.629 "method": "accel_set_options", 00:04:43.629 "params": { 00:04:43.629 "small_cache_size": 128, 00:04:43.629 "large_cache_size": 16, 00:04:43.629 "task_count": 2048, 00:04:43.629 "sequence_count": 2048, 00:04:43.629 "buf_count": 2048 00:04:43.629 } 00:04:43.629 } 00:04:43.629 ] 00:04:43.629 }, 00:04:43.629 { 00:04:43.629 "subsystem": "bdev", 00:04:43.629 "config": [ 00:04:43.629 { 00:04:43.629 "method": "bdev_set_options", 00:04:43.630 "params": { 00:04:43.630 "bdev_io_pool_size": 65535, 00:04:43.630 "bdev_io_cache_size": 256, 00:04:43.630 "bdev_auto_examine": true, 00:04:43.630 "iobuf_small_cache_size": 128, 00:04:43.630 "iobuf_large_cache_size": 16 00:04:43.630 } 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "method": "bdev_raid_set_options", 00:04:43.630 "params": { 00:04:43.630 "process_window_size_kb": 1024, 00:04:43.630 "process_max_bandwidth_mb_sec": 0 00:04:43.630 } 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "method": "bdev_iscsi_set_options", 00:04:43.630 "params": { 00:04:43.630 "timeout_sec": 30 00:04:43.630 } 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "method": "bdev_nvme_set_options", 00:04:43.630 "params": { 00:04:43.630 "action_on_timeout": "none", 00:04:43.630 "timeout_us": 0, 00:04:43.630 "timeout_admin_us": 0, 00:04:43.630 "keep_alive_timeout_ms": 10000, 00:04:43.630 "arbitration_burst": 0, 00:04:43.630 "low_priority_weight": 0, 00:04:43.630 "medium_priority_weight": 0, 00:04:43.630 "high_priority_weight": 0, 00:04:43.630 "nvme_adminq_poll_period_us": 10000, 00:04:43.630 "nvme_ioq_poll_period_us": 0, 00:04:43.630 "io_queue_requests": 0, 00:04:43.630 "delay_cmd_submit": true, 00:04:43.630 "transport_retry_count": 4, 00:04:43.630 "bdev_retry_count": 3, 00:04:43.630 "transport_ack_timeout": 0, 00:04:43.630 "ctrlr_loss_timeout_sec": 0, 00:04:43.630 "reconnect_delay_sec": 0, 00:04:43.630 "fast_io_fail_timeout_sec": 0, 00:04:43.630 "disable_auto_failback": false, 00:04:43.630 "generate_uuids": 
false, 00:04:43.630 "transport_tos": 0, 00:04:43.630 "nvme_error_stat": false, 00:04:43.630 "rdma_srq_size": 0, 00:04:43.630 "io_path_stat": false, 00:04:43.630 "allow_accel_sequence": false, 00:04:43.630 "rdma_max_cq_size": 0, 00:04:43.630 "rdma_cm_event_timeout_ms": 0, 00:04:43.630 "dhchap_digests": [ 00:04:43.630 "sha256", 00:04:43.630 "sha384", 00:04:43.630 "sha512" 00:04:43.630 ], 00:04:43.630 "dhchap_dhgroups": [ 00:04:43.630 "null", 00:04:43.630 "ffdhe2048", 00:04:43.630 "ffdhe3072", 00:04:43.630 "ffdhe4096", 00:04:43.630 "ffdhe6144", 00:04:43.630 "ffdhe8192" 00:04:43.630 ] 00:04:43.630 } 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "method": "bdev_nvme_set_hotplug", 00:04:43.630 "params": { 00:04:43.630 "period_us": 100000, 00:04:43.630 "enable": false 00:04:43.630 } 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "method": "bdev_wait_for_examine" 00:04:43.630 } 00:04:43.630 ] 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "subsystem": "scsi", 00:04:43.630 "config": null 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "subsystem": "scheduler", 00:04:43.630 "config": [ 00:04:43.630 { 00:04:43.630 "method": "framework_set_scheduler", 00:04:43.630 "params": { 00:04:43.630 "name": "static" 00:04:43.630 } 00:04:43.630 } 00:04:43.630 ] 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "subsystem": "vhost_scsi", 00:04:43.630 "config": [] 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "subsystem": "vhost_blk", 00:04:43.630 "config": [] 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "subsystem": "ublk", 00:04:43.630 "config": [] 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "subsystem": "nbd", 00:04:43.630 "config": [] 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "subsystem": "nvmf", 00:04:43.630 "config": [ 00:04:43.630 { 00:04:43.630 "method": "nvmf_set_config", 00:04:43.630 "params": { 00:04:43.630 "discovery_filter": "match_any", 00:04:43.630 "admin_cmd_passthru": { 00:04:43.630 "identify_ctrlr": false 00:04:43.630 } 00:04:43.630 } 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "method": "nvmf_set_max_subsystems", 00:04:43.630 "params": { 00:04:43.630 "max_subsystems": 1024 00:04:43.630 } 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "method": "nvmf_set_crdt", 00:04:43.630 "params": { 00:04:43.630 "crdt1": 0, 00:04:43.630 "crdt2": 0, 00:04:43.630 "crdt3": 0 00:04:43.630 } 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "method": "nvmf_create_transport", 00:04:43.630 "params": { 00:04:43.630 "trtype": "TCP", 00:04:43.630 "max_queue_depth": 128, 00:04:43.630 "max_io_qpairs_per_ctrlr": 127, 00:04:43.630 "in_capsule_data_size": 4096, 00:04:43.630 "max_io_size": 131072, 00:04:43.630 "io_unit_size": 131072, 00:04:43.630 "max_aq_depth": 128, 00:04:43.630 "num_shared_buffers": 511, 00:04:43.630 "buf_cache_size": 4294967295, 00:04:43.630 "dif_insert_or_strip": false, 00:04:43.630 "zcopy": false, 00:04:43.630 "c2h_success": true, 00:04:43.630 "sock_priority": 0, 00:04:43.630 "abort_timeout_sec": 1, 00:04:43.630 "ack_timeout": 0, 00:04:43.630 "data_wr_pool_size": 0 00:04:43.630 } 00:04:43.630 } 00:04:43.630 ] 00:04:43.630 }, 00:04:43.630 { 00:04:43.630 "subsystem": "iscsi", 00:04:43.630 "config": [ 00:04:43.630 { 00:04:43.630 "method": "iscsi_set_options", 00:04:43.630 "params": { 00:04:43.630 "node_base": "iqn.2016-06.io.spdk", 00:04:43.630 "max_sessions": 128, 00:04:43.630 "max_connections_per_session": 2, 00:04:43.630 "max_queue_depth": 64, 00:04:43.630 "default_time2wait": 2, 00:04:43.630 "default_time2retain": 20, 00:04:43.630 "first_burst_length": 8192, 00:04:43.630 "immediate_data": true, 00:04:43.630 "allow_duplicated_isid": 
false, 00:04:43.630 "error_recovery_level": 0, 00:04:43.630 "nop_timeout": 60, 00:04:43.630 "nop_in_interval": 30, 00:04:43.630 "disable_chap": false, 00:04:43.630 "require_chap": false, 00:04:43.630 "mutual_chap": false, 00:04:43.630 "chap_group": 0, 00:04:43.630 "max_large_datain_per_connection": 64, 00:04:43.630 "max_r2t_per_connection": 4, 00:04:43.630 "pdu_pool_size": 36864, 00:04:43.630 "immediate_data_pool_size": 16384, 00:04:43.630 "data_out_pool_size": 2048 00:04:43.630 } 00:04:43.630 } 00:04:43.630 ] 00:04:43.630 } 00:04:43.630 ] 00:04:43.630 } 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2499648 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2499648 ']' 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2499648 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499648 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2499648' 00:04:43.630 killing process with pid 2499648 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2499648 00:04:43.630 21:52:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2499648 00:04:43.889 21:52:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2499882 00:04:43.889 21:52:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:43.889 21:52:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:49.224 21:52:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2499882 00:04:49.224 21:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2499882 ']' 00:04:49.224 21:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2499882 00:04:49.224 21:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:49.224 21:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.224 21:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499882 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2499882' 00:04:49.224 killing process with pid 2499882 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2499882 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
2499882 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.224 00:04:49.224 real 0m6.770s 00:04:49.224 user 0m6.545s 00:04:49.224 sys 0m0.667s 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.224 ************************************ 00:04:49.224 END TEST skip_rpc_with_json 00:04:49.224 ************************************ 00:04:49.224 21:52:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:49.224 21:52:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.224 21:52:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.224 21:52:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.224 ************************************ 00:04:49.224 START TEST skip_rpc_with_delay 00:04:49.224 ************************************ 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:49.224 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.484 [2024-07-24 21:52:28.477980] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
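
The "Cannot use '--wait-for-rpc'" error above is the expected outcome of skip_rpc_with_delay: --wait-for-rpc asks the app to pause initialization until a framework_start_init RPC arrives, which cannot work when --no-rpc-server disables the RPC server, so spdk_tgt must refuse to start rather than hang. A sketch of that assertion, using the same flags as the trace (the binary path is illustrative):

    # must fail fast: --wait-for-rpc is meaningless without an RPC server to listen on
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'unexpected: spdk_tgt started while waiting for an RPC it can never receive' >&2
        exit 1
    fi
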
00:04:49.484 [2024-07-24 21:52:28.478043] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:49.484 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:49.484 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.484 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.484 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.484 00:04:49.484 real 0m0.070s 00:04:49.484 user 0m0.042s 00:04:49.484 sys 0m0.028s 00:04:49.484 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.484 21:52:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:49.484 ************************************ 00:04:49.484 END TEST skip_rpc_with_delay 00:04:49.484 ************************************ 00:04:49.484 21:52:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:49.484 21:52:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:49.484 21:52:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:49.484 21:52:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.484 21:52:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.484 21:52:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.484 ************************************ 00:04:49.484 START TEST exit_on_failed_rpc_init 00:04:49.484 ************************************ 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2500984 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2500984 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2500984 ']' 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.484 21:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.484 [2024-07-24 21:52:28.629934] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:04:49.484 [2024-07-24 21:52:28.629978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500984 ] 00:04:49.484 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.743 [2024-07-24 21:52:28.698969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.743 [2024-07-24 21:52:28.768364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.311 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.311 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:50.311 21:52:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.311 21:52:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.311 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.312 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.312 [2024-07-24 21:52:29.479734] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:04:50.312 [2024-07-24 21:52:29.479794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501119 ] 00:04:50.312 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.571 [2024-07-24 21:52:29.549831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.571 [2024-07-24 21:52:29.623429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.571 [2024-07-24 21:52:29.623499] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:50.571 [2024-07-24 21:52:29.623510] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:50.571 [2024-07-24 21:52:29.623518] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2500984 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2500984 ']' 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2500984 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2500984 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2500984' 00:04:50.571 killing process with pid 2500984 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2500984 00:04:50.571 21:52:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2500984 00:04:51.139 00:04:51.139 real 0m1.486s 00:04:51.139 user 0m1.692s 00:04:51.139 sys 0m0.446s 00:04:51.139 21:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.139 21:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.139 ************************************ 00:04:51.139 END TEST exit_on_failed_rpc_init 00:04:51.139 ************************************ 00:04:51.139 21:52:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.139 00:04:51.139 real 0m14.144s 00:04:51.139 user 0m13.563s 00:04:51.139 sys 0m1.752s 00:04:51.139 21:52:30 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.139 21:52:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.139 ************************************ 00:04:51.139 END TEST skip_rpc 00:04:51.139 ************************************ 00:04:51.139 21:52:30 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.139 21:52:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.139 21:52:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.139 21:52:30 -- common/autotest_common.sh@10 -- # set +x 00:04:51.139 ************************************ 00:04:51.139 START TEST rpc_client 00:04:51.139 ************************************ 00:04:51.139 21:52:30 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.139 * Looking for test storage... 00:04:51.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:51.140 21:52:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:51.140 OK 00:04:51.140 21:52:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.140 00:04:51.140 real 0m0.137s 00:04:51.140 user 0m0.060s 00:04:51.140 sys 0m0.087s 00:04:51.140 21:52:30 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.140 21:52:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:51.140 ************************************ 00:04:51.140 END TEST rpc_client 00:04:51.140 ************************************ 00:04:51.398 21:52:30 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.398 21:52:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.398 21:52:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.398 21:52:30 -- common/autotest_common.sh@10 -- # set +x 00:04:51.398 ************************************ 00:04:51.398 START TEST json_config 00:04:51.398 ************************************ 00:04:51.398 21:52:30 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.398 21:52:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:04:51.398 21:52:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.398 21:52:30 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.398 21:52:30 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.398 21:52:30 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.398 21:52:30 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.398 21:52:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.399 21:52:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.399 21:52:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.399 21:52:30 json_config -- paths/export.sh@5 -- # export PATH 00:04:51.399 21:52:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.399 21:52:30 json_config -- nvmf/common.sh@47 -- # : 0 00:04:51.399 21:52:30 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:51.399 21:52:30 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:51.399 21:52:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.399 21:52:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.399 21:52:30 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.399 21:52:30 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:51.399 21:52:30 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:51.399 21:52:30 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:51.399 INFO: JSON configuration test init 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.399 21:52:30 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:51.399 21:52:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:51.399 21:52:30 json_config -- json_config/common.sh@10 -- # shift 00:04:51.399 21:52:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.399 21:52:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.399 21:52:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.399 21:52:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:04:51.399 21:52:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.399 21:52:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2501369 00:04:51.399 21:52:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.399 Waiting for target to run... 00:04:51.399 21:52:30 json_config -- json_config/common.sh@25 -- # waitforlisten 2501369 /var/tmp/spdk_tgt.sock 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@831 -- # '[' -z 2501369 ']' 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.399 21:52:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.399 21:52:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.399 [2024-07-24 21:52:30.579637] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:04:51.399 [2024-07-24 21:52:30.579688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501369 ] 00:04:51.399 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.657 [2024-07-24 21:52:30.868184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.916 [2024-07-24 21:52:30.934919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.174 21:52:31 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.174 21:52:31 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:52.174 21:52:31 json_config -- json_config/common.sh@26 -- # echo '' 00:04:52.174 00:04:52.174 21:52:31 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:52.174 21:52:31 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:52.174 21:52:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.174 21:52:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.174 21:52:31 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:52.174 21:52:31 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:52.174 21:52:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.174 21:52:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.433 21:52:31 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:52.433 21:52:31 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:52.433 21:52:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:55.720 21:52:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.720 21:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:55.720 21:52:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@51 -- # sort 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:55.720 21:52:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.720 21:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:55.720 21:52:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.720 21:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:55.720 21:52:34 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.720 21:52:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.720 MallocForNvmf0 00:04:55.720 
21:52:34 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.720 21:52:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.978 MallocForNvmf1 00:04:55.978 21:52:35 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:55.978 21:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:56.238 [2024-07-24 21:52:35.218446] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.238 21:52:35 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.238 21:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.238 21:52:35 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.238 21:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.496 21:52:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.496 21:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.755 21:52:35 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:56.755 21:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:56.755 [2024-07-24 21:52:35.896564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:56.755 21:52:35 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:56.755 21:52:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.755 21:52:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.755 21:52:35 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:56.755 21:52:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.755 21:52:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.015 21:52:35 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:57.015 21:52:35 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.015 21:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.015 MallocBdevForConfigChangeCheck 00:04:57.015 21:52:36 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:57.015 21:52:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.015 21:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.015 21:52:36 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:57.015 21:52:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.584 21:52:36 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:57.584 INFO: shutting down applications... 00:04:57.584 21:52:36 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:57.584 21:52:36 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:57.584 21:52:36 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:57.584 21:52:36 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:59.490 Calling clear_iscsi_subsystem 00:04:59.490 Calling clear_nvmf_subsystem 00:04:59.490 Calling clear_nbd_subsystem 00:04:59.490 Calling clear_ublk_subsystem 00:04:59.490 Calling clear_vhost_blk_subsystem 00:04:59.490 Calling clear_vhost_scsi_subsystem 00:04:59.490 Calling clear_bdev_subsystem 00:04:59.490 21:52:38 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:59.490 21:52:38 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:59.490 21:52:38 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:59.490 21:52:38 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.490 21:52:38 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:59.490 21:52:38 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:00.060 21:52:38 json_config -- json_config/json_config.sh@349 -- # break 00:05:00.060 21:52:38 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:00.060 21:52:38 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:00.060 21:52:38 json_config -- json_config/common.sh@31 -- # local app=target 00:05:00.060 21:52:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.060 21:52:38 json_config -- json_config/common.sh@35 -- # [[ -n 2501369 ]] 00:05:00.060 21:52:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2501369 00:05:00.060 21:52:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.060 21:52:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.060 21:52:38 json_config -- json_config/common.sh@41 -- # kill -0 2501369 00:05:00.060 21:52:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.319 21:52:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.319 21:52:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.319 21:52:39 json_config -- json_config/common.sh@41 -- # kill -0 2501369 00:05:00.319 21:52:39 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.319 21:52:39 json_config -- json_config/common.sh@43 -- # break 00:05:00.319 21:52:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.319 21:52:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.319 SPDK target shutdown done 00:05:00.319 21:52:39 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:00.320 INFO: relaunching applications... 00:05:00.320 21:52:39 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.320 21:52:39 json_config -- json_config/common.sh@9 -- # local app=target 00:05:00.320 21:52:39 json_config -- json_config/common.sh@10 -- # shift 00:05:00.320 21:52:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.320 21:52:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.320 21:52:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.320 21:52:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.320 21:52:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.320 21:52:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2503082 00:05:00.320 21:52:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.320 Waiting for target to run... 00:05:00.320 21:52:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.320 21:52:39 json_config -- json_config/common.sh@25 -- # waitforlisten 2503082 /var/tmp/spdk_tgt.sock 00:05:00.320 21:52:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 2503082 ']' 00:05:00.320 21:52:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.320 21:52:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.320 21:52:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.320 21:52:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.320 21:52:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.579 [2024-07-24 21:52:39.550182] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:05:00.579 [2024-07-24 21:52:39.550236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503082 ] 00:05:00.579 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.839 [2024-07-24 21:52:39.981009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.098 [2024-07-24 21:52:40.068437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.390 [2024-07-24 21:52:43.100640] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.390 [2024-07-24 21:52:43.133018] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.662 21:52:43 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.663 21:52:43 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:04.663 21:52:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.663 00:05:04.663 21:52:43 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:04.663 21:52:43 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:04.663 INFO: Checking if target configuration is the same... 00:05:04.663 21:52:43 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.663 21:52:43 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:04.663 21:52:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.663 + '[' 2 -ne 2 ']' 00:05:04.663 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:04.663 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:04.663 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.663 +++ basename /dev/fd/62 00:05:04.663 ++ mktemp /tmp/62.XXX 00:05:04.663 + tmp_file_1=/tmp/62.EqS 00:05:04.663 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.663 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.663 + tmp_file_2=/tmp/spdk_tgt_config.json.QXc 00:05:04.663 + ret=0 00:05:04.663 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:04.934 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:04.934 + diff -u /tmp/62.EqS /tmp/spdk_tgt_config.json.QXc 00:05:04.934 + echo 'INFO: JSON config files are the same' 00:05:04.934 INFO: JSON config files are the same 00:05:04.934 + rm /tmp/62.EqS /tmp/spdk_tgt_config.json.QXc 00:05:04.934 + exit 0 00:05:04.934 21:52:44 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:04.934 21:52:44 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:04.934 INFO: changing configuration and checking if this can be detected... 
00:05:04.934 21:52:44 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:04.934 21:52:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.192 21:52:44 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.192 21:52:44 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:05.192 21:52:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.192 + '[' 2 -ne 2 ']' 00:05:05.193 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:05.193 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:05.193 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.193 +++ basename /dev/fd/62 00:05:05.193 ++ mktemp /tmp/62.XXX 00:05:05.193 + tmp_file_1=/tmp/62.qun 00:05:05.193 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.193 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.193 + tmp_file_2=/tmp/spdk_tgt_config.json.QYE 00:05:05.193 + ret=0 00:05:05.193 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.494 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.494 + diff -u /tmp/62.qun /tmp/spdk_tgt_config.json.QYE 00:05:05.494 + ret=1 00:05:05.494 + echo '=== Start of file: /tmp/62.qun ===' 00:05:05.494 + cat /tmp/62.qun 00:05:05.494 + echo '=== End of file: /tmp/62.qun ===' 00:05:05.494 + echo '' 00:05:05.494 + echo '=== Start of file: /tmp/spdk_tgt_config.json.QYE ===' 00:05:05.494 + cat /tmp/spdk_tgt_config.json.QYE 00:05:05.494 + echo '=== End of file: /tmp/spdk_tgt_config.json.QYE ===' 00:05:05.494 + echo '' 00:05:05.494 + rm /tmp/62.qun /tmp/spdk_tgt_config.json.QYE 00:05:05.494 + exit 1 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:05.494 INFO: configuration change detected. 
00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@321 -- # [[ -n 2503082 ]] 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.494 21:52:44 json_config -- json_config/json_config.sh@327 -- # killprocess 2503082 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@950 -- # '[' -z 2503082 ']' 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@954 -- # kill -0 2503082 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@955 -- # uname 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2503082 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2503082' 00:05:05.494 killing process with pid 2503082 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@969 -- # kill 2503082 00:05:05.494 21:52:44 json_config -- common/autotest_common.sh@974 -- # wait 2503082 00:05:08.032 21:52:46 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.032 21:52:46 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:08.032 21:52:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.032 21:52:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.032 21:52:46 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:08.032 21:52:46 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:08.032 INFO: Success 00:05:08.032 00:05:08.032 real 0m16.325s 
00:05:08.032 user 0m16.807s 00:05:08.032 sys 0m2.199s 00:05:08.032 21:52:46 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.032 21:52:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.032 ************************************ 00:05:08.032 END TEST json_config 00:05:08.032 ************************************ 00:05:08.032 21:52:46 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:08.032 21:52:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.032 21:52:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.032 21:52:46 -- common/autotest_common.sh@10 -- # set +x 00:05:08.032 ************************************ 00:05:08.032 START TEST json_config_extra_key 00:05:08.032 ************************************ 00:05:08.032 21:52:46 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.032 21:52:46 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.032 21:52:46 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.032 21:52:46 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.032 21:52:46 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.032 21:52:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.032 21:52:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.032 21:52:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:08.032 21:52:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:08.032 21:52:46 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:08.032 21:52:46 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:08.032 INFO: launching applications... 00:05:08.032 21:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2504518 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.032 Waiting for target to run... 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2504518 /var/tmp/spdk_tgt.sock 00:05:08.032 21:52:46 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2504518 ']' 00:05:08.032 21:52:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:08.032 21:52:46 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.032 21:52:46 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.032 21:52:46 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.033 21:52:46 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.033 21:52:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.033 [2024-07-24 21:52:46.984110] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:05:08.033 [2024-07-24 21:52:46.984163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504518 ] 00:05:08.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.292 [2024-07-24 21:52:47.422437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.551 [2024-07-24 21:52:47.512035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.810 21:52:47 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.810 21:52:47 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:08.810 21:52:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:08.810 00:05:08.810 21:52:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:08.810 INFO: shutting down applications... 00:05:08.810 21:52:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:08.810 21:52:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:08.810 21:52:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.810 21:52:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2504518 ]] 00:05:08.810 21:52:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2504518 00:05:08.811 21:52:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.811 21:52:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.811 21:52:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2504518 00:05:08.811 21:52:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.379 21:52:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.379 21:52:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.379 21:52:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2504518 00:05:09.379 21:52:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.379 21:52:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:09.379 21:52:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.379 21:52:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.379 SPDK target shutdown done 00:05:09.379 21:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:09.379 Success 00:05:09.379 00:05:09.379 real 0m1.467s 00:05:09.379 user 0m1.055s 00:05:09.379 sys 0m0.570s 00:05:09.379 21:52:48 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.379 21:52:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.379 ************************************ 00:05:09.379 END TEST json_config_extra_key 00:05:09.379 ************************************ 00:05:09.379 21:52:48 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.379 21:52:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.379 21:52:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.379 21:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:09.379 
************************************ 00:05:09.379 START TEST alias_rpc 00:05:09.379 ************************************ 00:05:09.379 21:52:48 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.379 * Looking for test storage... 00:05:09.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:09.379 21:52:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:09.379 21:52:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2504827 00:05:09.379 21:52:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2504827 00:05:09.379 21:52:48 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2504827 ']' 00:05:09.379 21:52:48 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.379 21:52:48 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.379 21:52:48 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.379 21:52:48 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.379 21:52:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.379 21:52:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.379 [2024-07-24 21:52:48.526737] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:09.379 [2024-07-24 21:52:48.526792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504827 ] 00:05:09.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.638 [2024-07-24 21:52:48.595755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.638 [2024-07-24 21:52:48.669942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.207 21:52:49 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.207 21:52:49 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:10.207 21:52:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:10.466 21:52:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2504827 00:05:10.466 21:52:49 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2504827 ']' 00:05:10.466 21:52:49 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2504827 00:05:10.466 21:52:49 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:10.466 21:52:49 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.466 21:52:49 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2504827 00:05:10.466 21:52:49 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.466 21:52:49 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.466 21:52:49 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2504827' 00:05:10.466 killing process with pid 2504827 00:05:10.466 21:52:49 alias_rpc -- common/autotest_common.sh@969 -- # kill 2504827 00:05:10.466 21:52:49 
alias_rpc -- common/autotest_common.sh@974 -- # wait 2504827 00:05:10.726 00:05:10.726 real 0m1.465s 00:05:10.726 user 0m1.540s 00:05:10.726 sys 0m0.426s 00:05:10.726 21:52:49 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.726 21:52:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.726 ************************************ 00:05:10.726 END TEST alias_rpc 00:05:10.726 ************************************ 00:05:10.726 21:52:49 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:10.726 21:52:49 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.726 21:52:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.726 21:52:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.726 21:52:49 -- common/autotest_common.sh@10 -- # set +x 00:05:10.726 ************************************ 00:05:10.726 START TEST spdkcli_tcp 00:05:10.726 ************************************ 00:05:10.726 21:52:49 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.986 * Looking for test storage... 00:05:10.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.986 21:52:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.986 21:52:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2505151 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2505151 00:05:10.986 21:52:49 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2505151 ']' 00:05:10.986 21:52:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.986 21:52:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.986 21:52:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.986 21:52:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.986 21:52:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.986 21:52:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.986 [2024-07-24 21:52:50.036451] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:05:10.986 [2024-07-24 21:52:50.036505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505151 ] 00:05:10.986 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.986 [2024-07-24 21:52:50.109303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.986 [2024-07-24 21:52:50.182983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.986 [2024-07-24 21:52:50.182986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.924 21:52:50 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.924 21:52:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:11.924 21:52:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2505223 00:05:11.924 21:52:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.924 21:52:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.924 [ 00:05:11.924 "bdev_malloc_delete", 00:05:11.924 "bdev_malloc_create", 00:05:11.924 "bdev_null_resize", 00:05:11.924 "bdev_null_delete", 00:05:11.924 "bdev_null_create", 00:05:11.924 "bdev_nvme_cuse_unregister", 00:05:11.924 "bdev_nvme_cuse_register", 00:05:11.924 "bdev_opal_new_user", 00:05:11.924 "bdev_opal_set_lock_state", 00:05:11.924 "bdev_opal_delete", 00:05:11.924 "bdev_opal_get_info", 00:05:11.924 "bdev_opal_create", 00:05:11.924 "bdev_nvme_opal_revert", 00:05:11.924 "bdev_nvme_opal_init", 00:05:11.924 "bdev_nvme_send_cmd", 00:05:11.924 "bdev_nvme_get_path_iostat", 00:05:11.924 "bdev_nvme_get_mdns_discovery_info", 00:05:11.924 "bdev_nvme_stop_mdns_discovery", 00:05:11.924 "bdev_nvme_start_mdns_discovery", 00:05:11.924 "bdev_nvme_set_multipath_policy", 00:05:11.924 "bdev_nvme_set_preferred_path", 00:05:11.924 "bdev_nvme_get_io_paths", 00:05:11.924 "bdev_nvme_remove_error_injection", 00:05:11.924 "bdev_nvme_add_error_injection", 00:05:11.924 "bdev_nvme_get_discovery_info", 00:05:11.924 "bdev_nvme_stop_discovery", 00:05:11.925 "bdev_nvme_start_discovery", 00:05:11.925 "bdev_nvme_get_controller_health_info", 00:05:11.925 "bdev_nvme_disable_controller", 00:05:11.925 "bdev_nvme_enable_controller", 00:05:11.925 "bdev_nvme_reset_controller", 00:05:11.925 "bdev_nvme_get_transport_statistics", 00:05:11.925 "bdev_nvme_apply_firmware", 00:05:11.925 "bdev_nvme_detach_controller", 00:05:11.925 "bdev_nvme_get_controllers", 00:05:11.925 "bdev_nvme_attach_controller", 00:05:11.925 "bdev_nvme_set_hotplug", 00:05:11.925 "bdev_nvme_set_options", 00:05:11.925 "bdev_passthru_delete", 00:05:11.925 "bdev_passthru_create", 00:05:11.925 "bdev_lvol_set_parent_bdev", 00:05:11.925 "bdev_lvol_set_parent", 00:05:11.925 "bdev_lvol_check_shallow_copy", 00:05:11.925 "bdev_lvol_start_shallow_copy", 00:05:11.925 "bdev_lvol_grow_lvstore", 00:05:11.925 "bdev_lvol_get_lvols", 00:05:11.925 "bdev_lvol_get_lvstores", 00:05:11.925 "bdev_lvol_delete", 00:05:11.925 "bdev_lvol_set_read_only", 00:05:11.925 "bdev_lvol_resize", 00:05:11.925 "bdev_lvol_decouple_parent", 00:05:11.925 "bdev_lvol_inflate", 00:05:11.925 "bdev_lvol_rename", 00:05:11.925 "bdev_lvol_clone_bdev", 00:05:11.925 "bdev_lvol_clone", 00:05:11.925 "bdev_lvol_snapshot", 00:05:11.925 "bdev_lvol_create", 00:05:11.925 "bdev_lvol_delete_lvstore", 00:05:11.925 
"bdev_lvol_rename_lvstore", 00:05:11.925 "bdev_lvol_create_lvstore", 00:05:11.925 "bdev_raid_set_options", 00:05:11.925 "bdev_raid_remove_base_bdev", 00:05:11.925 "bdev_raid_add_base_bdev", 00:05:11.925 "bdev_raid_delete", 00:05:11.925 "bdev_raid_create", 00:05:11.925 "bdev_raid_get_bdevs", 00:05:11.925 "bdev_error_inject_error", 00:05:11.925 "bdev_error_delete", 00:05:11.925 "bdev_error_create", 00:05:11.925 "bdev_split_delete", 00:05:11.925 "bdev_split_create", 00:05:11.925 "bdev_delay_delete", 00:05:11.925 "bdev_delay_create", 00:05:11.925 "bdev_delay_update_latency", 00:05:11.925 "bdev_zone_block_delete", 00:05:11.925 "bdev_zone_block_create", 00:05:11.925 "blobfs_create", 00:05:11.925 "blobfs_detect", 00:05:11.925 "blobfs_set_cache_size", 00:05:11.925 "bdev_aio_delete", 00:05:11.925 "bdev_aio_rescan", 00:05:11.925 "bdev_aio_create", 00:05:11.925 "bdev_ftl_set_property", 00:05:11.925 "bdev_ftl_get_properties", 00:05:11.925 "bdev_ftl_get_stats", 00:05:11.925 "bdev_ftl_unmap", 00:05:11.925 "bdev_ftl_unload", 00:05:11.925 "bdev_ftl_delete", 00:05:11.925 "bdev_ftl_load", 00:05:11.925 "bdev_ftl_create", 00:05:11.925 "bdev_virtio_attach_controller", 00:05:11.925 "bdev_virtio_scsi_get_devices", 00:05:11.925 "bdev_virtio_detach_controller", 00:05:11.925 "bdev_virtio_blk_set_hotplug", 00:05:11.925 "bdev_iscsi_delete", 00:05:11.925 "bdev_iscsi_create", 00:05:11.925 "bdev_iscsi_set_options", 00:05:11.925 "accel_error_inject_error", 00:05:11.925 "ioat_scan_accel_module", 00:05:11.925 "dsa_scan_accel_module", 00:05:11.925 "iaa_scan_accel_module", 00:05:11.925 "vfu_virtio_create_scsi_endpoint", 00:05:11.925 "vfu_virtio_scsi_remove_target", 00:05:11.925 "vfu_virtio_scsi_add_target", 00:05:11.925 "vfu_virtio_create_blk_endpoint", 00:05:11.925 "vfu_virtio_delete_endpoint", 00:05:11.925 "keyring_file_remove_key", 00:05:11.925 "keyring_file_add_key", 00:05:11.925 "keyring_linux_set_options", 00:05:11.925 "iscsi_get_histogram", 00:05:11.925 "iscsi_enable_histogram", 00:05:11.925 "iscsi_set_options", 00:05:11.925 "iscsi_get_auth_groups", 00:05:11.925 "iscsi_auth_group_remove_secret", 00:05:11.925 "iscsi_auth_group_add_secret", 00:05:11.925 "iscsi_delete_auth_group", 00:05:11.925 "iscsi_create_auth_group", 00:05:11.925 "iscsi_set_discovery_auth", 00:05:11.925 "iscsi_get_options", 00:05:11.925 "iscsi_target_node_request_logout", 00:05:11.925 "iscsi_target_node_set_redirect", 00:05:11.925 "iscsi_target_node_set_auth", 00:05:11.925 "iscsi_target_node_add_lun", 00:05:11.925 "iscsi_get_stats", 00:05:11.925 "iscsi_get_connections", 00:05:11.925 "iscsi_portal_group_set_auth", 00:05:11.925 "iscsi_start_portal_group", 00:05:11.925 "iscsi_delete_portal_group", 00:05:11.925 "iscsi_create_portal_group", 00:05:11.925 "iscsi_get_portal_groups", 00:05:11.925 "iscsi_delete_target_node", 00:05:11.925 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.925 "iscsi_target_node_add_pg_ig_maps", 00:05:11.925 "iscsi_create_target_node", 00:05:11.925 "iscsi_get_target_nodes", 00:05:11.925 "iscsi_delete_initiator_group", 00:05:11.925 "iscsi_initiator_group_remove_initiators", 00:05:11.925 "iscsi_initiator_group_add_initiators", 00:05:11.925 "iscsi_create_initiator_group", 00:05:11.925 "iscsi_get_initiator_groups", 00:05:11.925 "nvmf_set_crdt", 00:05:11.925 "nvmf_set_config", 00:05:11.925 "nvmf_set_max_subsystems", 00:05:11.925 "nvmf_stop_mdns_prr", 00:05:11.925 "nvmf_publish_mdns_prr", 00:05:11.925 "nvmf_subsystem_get_listeners", 00:05:11.925 "nvmf_subsystem_get_qpairs", 00:05:11.925 "nvmf_subsystem_get_controllers", 00:05:11.925 
"nvmf_get_stats", 00:05:11.925 "nvmf_get_transports", 00:05:11.925 "nvmf_create_transport", 00:05:11.925 "nvmf_get_targets", 00:05:11.925 "nvmf_delete_target", 00:05:11.925 "nvmf_create_target", 00:05:11.925 "nvmf_subsystem_allow_any_host", 00:05:11.925 "nvmf_subsystem_remove_host", 00:05:11.925 "nvmf_subsystem_add_host", 00:05:11.925 "nvmf_ns_remove_host", 00:05:11.925 "nvmf_ns_add_host", 00:05:11.925 "nvmf_subsystem_remove_ns", 00:05:11.925 "nvmf_subsystem_add_ns", 00:05:11.925 "nvmf_subsystem_listener_set_ana_state", 00:05:11.925 "nvmf_discovery_get_referrals", 00:05:11.925 "nvmf_discovery_remove_referral", 00:05:11.925 "nvmf_discovery_add_referral", 00:05:11.925 "nvmf_subsystem_remove_listener", 00:05:11.925 "nvmf_subsystem_add_listener", 00:05:11.925 "nvmf_delete_subsystem", 00:05:11.925 "nvmf_create_subsystem", 00:05:11.925 "nvmf_get_subsystems", 00:05:11.925 "env_dpdk_get_mem_stats", 00:05:11.925 "nbd_get_disks", 00:05:11.925 "nbd_stop_disk", 00:05:11.925 "nbd_start_disk", 00:05:11.925 "ublk_recover_disk", 00:05:11.925 "ublk_get_disks", 00:05:11.925 "ublk_stop_disk", 00:05:11.925 "ublk_start_disk", 00:05:11.925 "ublk_destroy_target", 00:05:11.925 "ublk_create_target", 00:05:11.925 "virtio_blk_create_transport", 00:05:11.925 "virtio_blk_get_transports", 00:05:11.925 "vhost_controller_set_coalescing", 00:05:11.925 "vhost_get_controllers", 00:05:11.925 "vhost_delete_controller", 00:05:11.925 "vhost_create_blk_controller", 00:05:11.925 "vhost_scsi_controller_remove_target", 00:05:11.925 "vhost_scsi_controller_add_target", 00:05:11.925 "vhost_start_scsi_controller", 00:05:11.925 "vhost_create_scsi_controller", 00:05:11.925 "thread_set_cpumask", 00:05:11.925 "framework_get_governor", 00:05:11.925 "framework_get_scheduler", 00:05:11.925 "framework_set_scheduler", 00:05:11.925 "framework_get_reactors", 00:05:11.925 "thread_get_io_channels", 00:05:11.925 "thread_get_pollers", 00:05:11.925 "thread_get_stats", 00:05:11.925 "framework_monitor_context_switch", 00:05:11.925 "spdk_kill_instance", 00:05:11.925 "log_enable_timestamps", 00:05:11.925 "log_get_flags", 00:05:11.925 "log_clear_flag", 00:05:11.925 "log_set_flag", 00:05:11.925 "log_get_level", 00:05:11.925 "log_set_level", 00:05:11.925 "log_get_print_level", 00:05:11.925 "log_set_print_level", 00:05:11.925 "framework_enable_cpumask_locks", 00:05:11.925 "framework_disable_cpumask_locks", 00:05:11.925 "framework_wait_init", 00:05:11.925 "framework_start_init", 00:05:11.925 "scsi_get_devices", 00:05:11.925 "bdev_get_histogram", 00:05:11.925 "bdev_enable_histogram", 00:05:11.925 "bdev_set_qos_limit", 00:05:11.925 "bdev_set_qd_sampling_period", 00:05:11.925 "bdev_get_bdevs", 00:05:11.925 "bdev_reset_iostat", 00:05:11.925 "bdev_get_iostat", 00:05:11.925 "bdev_examine", 00:05:11.925 "bdev_wait_for_examine", 00:05:11.925 "bdev_set_options", 00:05:11.925 "notify_get_notifications", 00:05:11.925 "notify_get_types", 00:05:11.925 "accel_get_stats", 00:05:11.925 "accel_set_options", 00:05:11.925 "accel_set_driver", 00:05:11.925 "accel_crypto_key_destroy", 00:05:11.925 "accel_crypto_keys_get", 00:05:11.925 "accel_crypto_key_create", 00:05:11.925 "accel_assign_opc", 00:05:11.925 "accel_get_module_info", 00:05:11.925 "accel_get_opc_assignments", 00:05:11.925 "vmd_rescan", 00:05:11.925 "vmd_remove_device", 00:05:11.925 "vmd_enable", 00:05:11.925 "sock_get_default_impl", 00:05:11.925 "sock_set_default_impl", 00:05:11.925 "sock_impl_set_options", 00:05:11.925 "sock_impl_get_options", 00:05:11.925 "iobuf_get_stats", 00:05:11.925 "iobuf_set_options", 
00:05:11.925 "keyring_get_keys", 00:05:11.925 "framework_get_pci_devices", 00:05:11.925 "framework_get_config", 00:05:11.925 "framework_get_subsystems", 00:05:11.925 "vfu_tgt_set_base_path", 00:05:11.925 "trace_get_info", 00:05:11.925 "trace_get_tpoint_group_mask", 00:05:11.925 "trace_disable_tpoint_group", 00:05:11.925 "trace_enable_tpoint_group", 00:05:11.925 "trace_clear_tpoint_mask", 00:05:11.925 "trace_set_tpoint_mask", 00:05:11.926 "spdk_get_version", 00:05:11.926 "rpc_get_methods" 00:05:11.926 ] 00:05:11.926 21:52:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.926 21:52:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.926 21:52:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.926 21:52:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.926 21:52:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2505151 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2505151 ']' 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2505151 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2505151 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2505151' 00:05:11.926 killing process with pid 2505151 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2505151 00:05:11.926 21:52:51 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2505151 00:05:12.491 00:05:12.491 real 0m1.503s 00:05:12.491 user 0m2.750s 00:05:12.491 sys 0m0.478s 00:05:12.492 21:52:51 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.492 21:52:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.492 ************************************ 00:05:12.492 END TEST spdkcli_tcp 00:05:12.492 ************************************ 00:05:12.492 21:52:51 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.492 21:52:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.492 21:52:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.492 21:52:51 -- common/autotest_common.sh@10 -- # set +x 00:05:12.492 ************************************ 00:05:12.492 START TEST dpdk_mem_utility 00:05:12.492 ************************************ 00:05:12.492 21:52:51 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.492 * Looking for test storage... 
00:05:12.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:12.492 21:52:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:12.492 21:52:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2505484 00:05:12.492 21:52:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2505484 00:05:12.492 21:52:51 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2505484 ']' 00:05:12.492 21:52:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.492 21:52:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.492 21:52:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.492 21:52:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.492 21:52:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.492 21:52:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.492 [2024-07-24 21:52:51.635771] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:12.492 [2024-07-24 21:52:51.635821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505484 ] 00:05:12.492 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.749 [2024-07-24 21:52:51.705762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.749 [2024-07-24 21:52:51.779325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.315 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.315 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:13.315 21:52:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.315 21:52:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.315 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.315 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.315 { 00:05:13.315 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.315 } 00:05:13.315 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.315 21:52:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:13.315 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:13.315 1 heaps totaling size 814.000000 MiB 00:05:13.315 size: 814.000000 MiB heap id: 0 00:05:13.315 end heaps---------- 00:05:13.315 8 mempools totaling size 598.116089 MiB 00:05:13.315 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.315 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.315 size: 84.521057 MiB name: bdev_io_2505484 00:05:13.315 size: 51.011292 MiB name: evtpool_2505484 00:05:13.315 
size: 50.003479 MiB name: msgpool_2505484 00:05:13.315 size: 21.763794 MiB name: PDU_Pool 00:05:13.315 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.315 size: 0.026123 MiB name: Session_Pool 00:05:13.315 end mempools------- 00:05:13.315 6 memzones totaling size 4.142822 MiB 00:05:13.315 size: 1.000366 MiB name: RG_ring_0_2505484 00:05:13.315 size: 1.000366 MiB name: RG_ring_1_2505484 00:05:13.315 size: 1.000366 MiB name: RG_ring_4_2505484 00:05:13.315 size: 1.000366 MiB name: RG_ring_5_2505484 00:05:13.315 size: 0.125366 MiB name: RG_ring_2_2505484 00:05:13.315 size: 0.015991 MiB name: RG_ring_3_2505484 00:05:13.315 end memzones------- 00:05:13.315 21:52:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.573 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:13.573 list of free elements. size: 12.519348 MiB 00:05:13.573 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:13.573 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:13.573 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:13.573 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:13.573 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:13.573 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:13.573 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:13.573 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:13.573 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:13.574 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:13.574 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:13.574 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:13.574 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:13.574 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:13.574 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:13.574 list of standard malloc elements. 
size: 199.218079 MiB 00:05:13.574 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:13.574 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:13.574 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:13.574 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:13.574 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:13.574 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:13.574 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:13.574 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:13.574 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:13.574 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:13.574 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:13.574 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:13.574 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:13.574 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:13.574 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:13.574 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:13.574 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:13.574 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:13.574 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:13.574 list of memzone associated elements. 
size: 602.262573 MiB 00:05:13.574 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:13.574 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.574 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:13.574 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.574 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:13.574 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2505484_0 00:05:13.574 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:13.574 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2505484_0 00:05:13.574 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:13.574 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2505484_0 00:05:13.574 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:13.574 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.574 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:13.574 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.574 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:13.574 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2505484 00:05:13.574 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:13.574 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2505484 00:05:13.574 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:13.574 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2505484 00:05:13.574 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:13.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.574 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:13.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.574 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:13.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.574 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:13.574 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.574 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:13.574 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2505484 00:05:13.574 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:13.574 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2505484 00:05:13.574 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:13.574 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2505484 00:05:13.574 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:13.574 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2505484 00:05:13.574 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:13.574 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2505484 00:05:13.574 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:13.574 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.574 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:13.574 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.574 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:13.574 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.574 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:13.574 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2505484 00:05:13.574 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:13.574 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.574 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:13.574 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.574 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:13.574 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2505484 00:05:13.574 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:13.574 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.574 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:13.574 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2505484 00:05:13.574 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:13.574 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2505484 00:05:13.574 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:13.574 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.574 21:52:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.574 21:52:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2505484 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2505484 ']' 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2505484 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2505484 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2505484' 00:05:13.574 killing process with pid 2505484 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2505484 00:05:13.574 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2505484 00:05:13.833 00:05:13.833 real 0m1.422s 00:05:13.833 user 0m1.452s 00:05:13.833 sys 0m0.454s 00:05:13.833 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.833 21:52:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.833 ************************************ 00:05:13.833 END TEST dpdk_mem_utility 00:05:13.833 ************************************ 00:05:13.833 21:52:52 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:13.833 21:52:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.833 21:52:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.833 21:52:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.833 ************************************ 00:05:13.833 START TEST event 00:05:13.833 ************************************ 00:05:13.833 21:52:52 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:14.091 * Looking for test storage... 
00:05:14.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:14.092 21:52:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:14.092 21:52:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:14.092 21:52:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.092 21:52:53 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:14.092 21:52:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.092 21:52:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.092 ************************************ 00:05:14.092 START TEST event_perf 00:05:14.092 ************************************ 00:05:14.092 21:52:53 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.092 Running I/O for 1 seconds...[2024-07-24 21:52:53.118479] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:14.092 [2024-07-24 21:52:53.118566] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505810 ] 00:05:14.092 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.092 [2024-07-24 21:52:53.191832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.092 [2024-07-24 21:52:53.263128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.092 [2024-07-24 21:52:53.263222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.092 [2024-07-24 21:52:53.263310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.092 [2024-07-24 21:52:53.263312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.490 Running I/O for 1 seconds... 00:05:15.490 lcore 0: 213102 00:05:15.490 lcore 1: 213103 00:05:15.490 lcore 2: 213103 00:05:15.490 lcore 3: 213101 00:05:15.490 done. 00:05:15.490 00:05:15.490 real 0m1.235s 00:05:15.490 user 0m4.142s 00:05:15.490 sys 0m0.090s 00:05:15.490 21:52:54 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.490 21:52:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.490 ************************************ 00:05:15.490 END TEST event_perf 00:05:15.490 ************************************ 00:05:15.490 21:52:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:15.490 21:52:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:15.490 21:52:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.490 21:52:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.490 ************************************ 00:05:15.490 START TEST event_reactor 00:05:15.490 ************************************ 00:05:15.490 21:52:54 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:15.490 [2024-07-24 21:52:54.426057] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:05:15.490 [2024-07-24 21:52:54.426143] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506091 ] 00:05:15.490 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.490 [2024-07-24 21:52:54.496204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.490 [2024-07-24 21:52:54.563427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.425 test_start 00:05:16.425 oneshot 00:05:16.425 tick 100 00:05:16.425 tick 100 00:05:16.425 tick 250 00:05:16.425 tick 100 00:05:16.425 tick 100 00:05:16.425 tick 250 00:05:16.425 tick 500 00:05:16.425 tick 100 00:05:16.425 tick 100 00:05:16.425 tick 100 00:05:16.425 tick 250 00:05:16.425 tick 100 00:05:16.425 tick 100 00:05:16.425 test_end 00:05:16.425 00:05:16.425 real 0m1.223s 00:05:16.425 user 0m1.135s 00:05:16.425 sys 0m0.085s 00:05:16.425 21:52:55 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.425 21:52:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:16.425 ************************************ 00:05:16.425 END TEST event_reactor 00:05:16.425 ************************************ 00:05:16.683 21:52:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.683 21:52:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:16.683 21:52:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.683 21:52:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.683 ************************************ 00:05:16.684 START TEST event_reactor_perf 00:05:16.684 ************************************ 00:05:16.684 21:52:55 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.684 [2024-07-24 21:52:55.713531] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:05:16.684 [2024-07-24 21:52:55.713580] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506312 ] 00:05:16.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.684 [2024-07-24 21:52:55.783180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.684 [2024-07-24 21:52:55.852209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.060 test_start 00:05:18.060 test_end 00:05:18.060 Performance: 531943 events per second 00:05:18.060 00:05:18.060 real 0m1.214s 00:05:18.060 user 0m1.129s 00:05:18.060 sys 0m0.082s 00:05:18.060 21:52:56 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.060 21:52:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.060 ************************************ 00:05:18.060 END TEST event_reactor_perf 00:05:18.060 ************************************ 00:05:18.060 21:52:56 event -- event/event.sh@49 -- # uname -s 00:05:18.060 21:52:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.060 21:52:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:18.060 21:52:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.060 21:52:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.060 21:52:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.060 ************************************ 00:05:18.060 START TEST event_scheduler 00:05:18.060 ************************************ 00:05:18.060 21:52:56 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:18.060 * Looking for test storage... 00:05:18.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:18.060 21:52:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:18.060 21:52:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2506554 00:05:18.060 21:52:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.060 21:52:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:18.060 21:52:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2506554 00:05:18.060 21:52:57 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2506554 ']' 00:05:18.060 21:52:57 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.060 21:52:57 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.060 21:52:57 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:18.060 21:52:57 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.060 21:52:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.060 [2024-07-24 21:52:57.138219] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:18.060 [2024-07-24 21:52:57.138279] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506554 ] 00:05:18.060 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.060 [2024-07-24 21:52:57.205209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.320 [2024-07-24 21:52:57.283413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.320 [2024-07-24 21:52:57.283498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.320 [2024-07-24 21:52:57.283580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.320 [2024-07-24 21:52:57.283582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.887 21:52:57 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.887 21:52:57 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:18.887 21:52:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:18.887 21:52:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.887 21:52:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.887 [2024-07-24 21:52:57.953959] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:18.887 [2024-07-24 21:52:57.953980] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:18.887 [2024-07-24 21:52:57.953991] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:18.887 [2024-07-24 21:52:57.953999] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:18.887 [2024-07-24 21:52:57.954006] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:18.887 21:52:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.887 21:52:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:18.887 21:52:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.887 21:52:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.887 [2024-07-24 21:52:58.025650] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:18.887 21:52:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.887 21:52:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:18.887 21:52:58 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.887 21:52:58 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.887 21:52:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.887 ************************************ 00:05:18.887 START TEST scheduler_create_thread 00:05:18.887 ************************************ 00:05:18.887 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:18.887 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:18.887 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.887 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.887 2 00:05:18.887 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.888 3 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.888 4 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.888 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.147 5 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.147 6 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.147 7 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.147 8 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.147 9 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.147 10 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.147 21:52:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.529 21:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.529 21:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.529 21:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.529 21:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.529 21:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.537 21:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.537 00:05:21.537 real 0m2.619s 00:05:21.537 user 0m0.024s 00:05:21.537 sys 0m0.007s 00:05:21.538 21:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.538 21:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.538 ************************************ 00:05:21.538 END TEST scheduler_create_thread 00:05:21.538 ************************************ 00:05:21.538 21:53:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.538 21:53:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2506554 00:05:21.538 21:53:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2506554 ']' 00:05:21.538 21:53:00 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2506554 00:05:21.538 21:53:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:21.538 21:53:00 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.538 21:53:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2506554 00:05:21.797 21:53:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:21.797 21:53:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:21.797 21:53:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2506554' 00:05:21.797 killing process with pid 2506554 00:05:21.797 21:53:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2506554 00:05:21.797 21:53:00 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2506554 00:05:22.056 [2024-07-24 21:53:01.167866] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:22.315 00:05:22.315 real 0m4.377s 00:05:22.315 user 0m8.197s 00:05:22.315 sys 0m0.438s 00:05:22.315 21:53:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.315 21:53:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.315 ************************************ 00:05:22.315 END TEST event_scheduler 00:05:22.315 ************************************ 00:05:22.315 21:53:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.315 21:53:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.315 21:53:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.315 21:53:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.315 21:53:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.315 ************************************ 00:05:22.315 START TEST app_repeat 00:05:22.315 ************************************ 00:05:22.315 21:53:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2507281 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2507281' 00:05:22.315 Process app_repeat pid: 2507281 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.315 spdk_app_start Round 0 00:05:22.315 21:53:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2507281 /var/tmp/spdk-nbd.sock 00:05:22.315 21:53:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2507281 ']' 00:05:22.315 21:53:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.315 21:53:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.315 21:53:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.315 21:53:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.315 21:53:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.315 [2024-07-24 21:53:01.486302] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:05:22.315 [2024-07-24 21:53:01.486363] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507281 ] 00:05:22.315 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.575 [2024-07-24 21:53:01.558443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.575 [2024-07-24 21:53:01.629430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.575 [2024-07-24 21:53:01.629432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.144 21:53:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.144 21:53:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:23.144 21:53:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.403 Malloc0 00:05:23.403 21:53:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.663 Malloc1 00:05:23.663 21:53:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.663 /dev/nbd0 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:23.663 21:53:02 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.663 1+0 records in 00:05:23.663 1+0 records out 00:05:23.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227304 s, 18.0 MB/s 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:23.663 21:53:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.663 21:53:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.923 /dev/nbd1 00:05:23.923 21:53:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.923 21:53:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.923 1+0 records in 00:05:23.923 1+0 records out 00:05:23.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221767 s, 18.5 MB/s 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:23.923 21:53:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:23.923 21:53:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.923 21:53:03 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.923 21:53:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.923 21:53:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.923 21:53:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.183 { 00:05:24.183 "nbd_device": "/dev/nbd0", 00:05:24.183 "bdev_name": "Malloc0" 00:05:24.183 }, 00:05:24.183 { 00:05:24.183 "nbd_device": "/dev/nbd1", 00:05:24.183 "bdev_name": "Malloc1" 00:05:24.183 } 00:05:24.183 ]' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.183 { 00:05:24.183 "nbd_device": "/dev/nbd0", 00:05:24.183 "bdev_name": "Malloc0" 00:05:24.183 }, 00:05:24.183 { 00:05:24.183 "nbd_device": "/dev/nbd1", 00:05:24.183 "bdev_name": "Malloc1" 00:05:24.183 } 00:05:24.183 ]' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.183 /dev/nbd1' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.183 /dev/nbd1' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.183 256+0 records in 00:05:24.183 256+0 records out 00:05:24.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113995 s, 92.0 MB/s 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.183 256+0 records in 00:05:24.183 256+0 records out 00:05:24.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153728 s, 68.2 MB/s 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.183 256+0 records in 00:05:24.183 256+0 records out 00:05:24.183 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0208633 s, 50.3 MB/s 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.183 21:53:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.442 21:53:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.701 21:53:03 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.701 21:53:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.960 21:53:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.960 21:53:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.960 21:53:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.219 [2024-07-24 21:53:04.344840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.219 [2024-07-24 21:53:04.407079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.219 [2024-07-24 21:53:04.407081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.478 [2024-07-24 21:53:04.446919] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.478 [2024-07-24 21:53:04.446964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.014 21:53:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.014 21:53:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:28.014 spdk_app_start Round 1 00:05:28.014 21:53:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2507281 /var/tmp/spdk-nbd.sock 00:05:28.014 21:53:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2507281 ']' 00:05:28.014 21:53:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.014 21:53:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.014 21:53:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
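The Round 0 I/O check above follows the nbd_common.sh write/verify pattern. A condensed sketch of that flow, with the temp-file path shortened for readability (the trace uses the nbdrandtest file under the workspace, and assumes /dev/nbd0 and /dev/nbd1 are already backed by the two Malloc bdevs):

# Sketch: push random data through each nbd device, then verify it reads back byte-for-byte.
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest            # trace: .../spdk/test/event/nbdrandtest

# write phase: 256 x 4 KiB of random data, copied to each device with O_DIRECT
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: the first 1 MiB of each device must match the reference file
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"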
00:05:28.014 21:53:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.014 21:53:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.273 21:53:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.273 21:53:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:28.273 21:53:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.533 Malloc0 00:05:28.533 21:53:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.533 Malloc1 00:05:28.533 21:53:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.533 21:53:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.792 /dev/nbd0 00:05:28.792 21:53:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.792 21:53:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:28.792 1+0 records in 00:05:28.792 1+0 records out 00:05:28.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000134886 s, 30.4 MB/s 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.792 21:53:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.792 21:53:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.792 21:53:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.792 21:53:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.051 /dev/nbd1 00:05:29.051 21:53:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.051 21:53:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.051 1+0 records in 00:05:29.051 1+0 records out 00:05:29.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261121 s, 15.7 MB/s 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:29.051 21:53:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:29.051 21:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.051 21:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.051 21:53:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.051 21:53:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.051 21:53:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:29.311 { 00:05:29.311 "nbd_device": "/dev/nbd0", 00:05:29.311 "bdev_name": "Malloc0" 00:05:29.311 }, 00:05:29.311 { 00:05:29.311 "nbd_device": "/dev/nbd1", 00:05:29.311 "bdev_name": "Malloc1" 00:05:29.311 } 00:05:29.311 ]' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.311 { 00:05:29.311 "nbd_device": "/dev/nbd0", 00:05:29.311 "bdev_name": "Malloc0" 00:05:29.311 }, 00:05:29.311 { 00:05:29.311 "nbd_device": "/dev/nbd1", 00:05:29.311 "bdev_name": "Malloc1" 00:05:29.311 } 00:05:29.311 ]' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.311 /dev/nbd1' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.311 /dev/nbd1' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.311 256+0 records in 00:05:29.311 256+0 records out 00:05:29.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113882 s, 92.1 MB/s 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.311 256+0 records in 00:05:29.311 256+0 records out 00:05:29.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197335 s, 53.1 MB/s 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.311 256+0 records in 00:05:29.311 256+0 records out 00:05:29.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204684 s, 51.2 MB/s 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.311 21:53:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.570 21:53:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.829 21:53:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.830 21:53:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.830 21:53:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.830 21:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.830 21:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.830 21:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.830 21:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.830 21:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.830 21:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.830 21:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.830 21:53:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.830 21:53:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.830 21:53:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.830 21:53:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.830 21:53:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.088 21:53:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.348 [2024-07-24 21:53:09.394410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.348 [2024-07-24 21:53:09.462935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.348 [2024-07-24 21:53:09.462938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.348 [2024-07-24 21:53:09.504875] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.348 [2024-07-24 21:53:09.504916] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.658 21:53:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.658 21:53:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:33.658 spdk_app_start Round 2 00:05:33.658 21:53:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2507281 /var/tmp/spdk-nbd.sock 00:05:33.658 21:53:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2507281 ']' 00:05:33.658 21:53:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.658 21:53:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.658 21:53:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
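Each nbd_start_disk call above blocks on a waitfornbd-style readiness probe before the round proceeds. A sketch of that probe reconstructed from the trace (only the successful first iteration is visible in the log, so the polling delay and the failure return are assumptions; the temp-file path is shortened):

# Sketch: wait until the kernel exposes the nbd device, then confirm a direct read works.
waitfornbd() {
    local nbd_name=$1 i size
    # wait (up to 20 polls) for the device to appear in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                              # delay between polls is assumed, not shown in the trace
    done
    # read one 4 KiB block with O_DIRECT and check that a non-empty file came back
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0           # trace: '[' 4096 '!=' 0 ']' -> return 0
    done
    return 1                                   # failure path assumed; never reached in this run
}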
00:05:33.658 21:53:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.658 21:53:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.658 21:53:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.658 21:53:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:33.658 21:53:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.658 Malloc0 00:05:33.658 21:53:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.658 Malloc1 00:05:33.658 21:53:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.658 21:53:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.917 /dev/nbd0 00:05:33.917 21:53:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.917 21:53:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.917 21:53:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:33.917 21:53:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.917 21:53:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.917 21:53:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.917 21:53:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:33.917 21:53:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.917 21:53:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.917 21:53:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.917 21:53:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:33.917 1+0 records in 00:05:33.917 1+0 records out 00:05:33.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026222 s, 15.6 MB/s 00:05:33.918 21:53:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.918 21:53:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.918 21:53:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.918 21:53:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.918 21:53:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.918 21:53:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.918 21:53:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.918 21:53:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.177 /dev/nbd1 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.177 1+0 records in 00:05:34.177 1+0 records out 00:05:34.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262104 s, 15.6 MB/s 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:34.177 21:53:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:34.177 { 00:05:34.177 "nbd_device": "/dev/nbd0", 00:05:34.177 "bdev_name": "Malloc0" 00:05:34.177 }, 00:05:34.177 { 00:05:34.177 "nbd_device": "/dev/nbd1", 00:05:34.177 "bdev_name": "Malloc1" 00:05:34.177 } 00:05:34.177 ]' 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.177 { 00:05:34.177 "nbd_device": "/dev/nbd0", 00:05:34.177 "bdev_name": "Malloc0" 00:05:34.177 }, 00:05:34.177 { 00:05:34.177 "nbd_device": "/dev/nbd1", 00:05:34.177 "bdev_name": "Malloc1" 00:05:34.177 } 00:05:34.177 ]' 00:05:34.177 21:53:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.436 /dev/nbd1' 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.436 /dev/nbd1' 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.436 256+0 records in 00:05:34.436 256+0 records out 00:05:34.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105071 s, 99.8 MB/s 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.436 256+0 records in 00:05:34.436 256+0 records out 00:05:34.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197473 s, 53.1 MB/s 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.436 256+0 records in 00:05:34.436 256+0 records out 00:05:34.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201667 s, 52.0 MB/s 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.436 21:53:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.437 21:53:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.695 21:53:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.954 21:53:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.954 21:53:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.213 21:53:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.472 [2024-07-24 21:53:14.489456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.472 [2024-07-24 21:53:14.551693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.472 [2024-07-24 21:53:14.551695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.472 [2024-07-24 21:53:14.592282] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.472 [2024-07-24 21:53:14.592326] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.818 21:53:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2507281 /var/tmp/spdk-nbd.sock 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2507281 ']' 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
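All three app_repeat rounds above run the same setup/verify/teardown cycle against the app's RPC socket. A sketch of one round, with the full workspace path to rpc.py abbreviated to a shell variable (the RPC names and arguments are exactly those in the trace; the data verification step is the dd/cmp sketch shown after Round 0):

# Sketch: one app_repeat round over /var/tmp/spdk-nbd.sock.
rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # trace uses the full path under the workspace
$rpc bdev_malloc_create 64 4096                  # -> Malloc0
$rpc bdev_malloc_create 64 4096                  # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1
$rpc nbd_get_disks                               # JSON listing of both nbd/bdev mappings
# ... dd write + cmp verify over /dev/nbd0 and /dev/nbd1 ...
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc nbd_get_disks                               # now an empty list
$rpc spdk_kill_instance SIGTERM                  # end of the round; the app restarts for the next one
sleep 3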
00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:38.818 21:53:17 event.app_repeat -- event/event.sh@39 -- # killprocess 2507281 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2507281 ']' 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2507281 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2507281 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2507281' 00:05:38.818 killing process with pid 2507281 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2507281 00:05:38.818 21:53:17 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2507281 00:05:38.818 spdk_app_start is called in Round 0. 00:05:38.818 Shutdown signal received, stop current app iteration 00:05:38.818 Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 reinitialization... 00:05:38.818 spdk_app_start is called in Round 1. 00:05:38.818 Shutdown signal received, stop current app iteration 00:05:38.818 Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 reinitialization... 00:05:38.818 spdk_app_start is called in Round 2. 00:05:38.818 Shutdown signal received, stop current app iteration 00:05:38.819 Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 reinitialization... 00:05:38.819 spdk_app_start is called in Round 3. 
00:05:38.819 Shutdown signal received, stop current app iteration 00:05:38.819 21:53:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:38.819 21:53:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:38.819 00:05:38.819 real 0m16.255s 00:05:38.819 user 0m34.588s 00:05:38.819 sys 0m2.949s 00:05:38.819 21:53:17 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.819 21:53:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.819 ************************************ 00:05:38.819 END TEST app_repeat 00:05:38.819 ************************************ 00:05:38.819 21:53:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:38.819 21:53:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:38.819 21:53:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.819 21:53:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.819 21:53:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.819 ************************************ 00:05:38.819 START TEST cpu_locks 00:05:38.819 ************************************ 00:05:38.819 21:53:17 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:38.819 * Looking for test storage... 00:05:38.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:38.819 21:53:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:38.819 21:53:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:38.819 21:53:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:38.819 21:53:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:38.819 21:53:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.819 21:53:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.819 21:53:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.819 ************************************ 00:05:38.819 START TEST default_locks 00:05:38.819 ************************************ 00:05:38.819 21:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:38.819 21:53:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.819 21:53:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2510406 00:05:38.819 21:53:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2510406 00:05:38.819 21:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2510406 ']' 00:05:38.819 21:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.819 21:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.819 21:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
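The default_locks case now starting follows the same start/stop discipline every cpu_locks sub-test uses in this trace: launch spdk_tgt in the background, record its pid, and block in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, assuming waitforlisten and killprocess are sourced from test/common/autotest_common.sh as they are here:

    ./build/bin/spdk_tgt -m 0x1 &                        # pin the target to core 0
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock     # poll until the RPC socket accepts connections
    # ... exercise the lock behaviour under test ...
    killprocess "$spdk_tgt_pid"                          # teardown, mirrored later in the trace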
00:05:38.819 21:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.819 21:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.819 [2024-07-24 21:53:17.972367] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:38.819 [2024-07-24 21:53:17.972410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2510406 ] 00:05:38.819 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.079 [2024-07-24 21:53:18.041491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.079 [2024-07-24 21:53:18.116580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.647 21:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.647 21:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:39.647 21:53:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2510406 00:05:39.647 21:53:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2510406 00:05:39.647 21:53:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.216 lslocks: write error 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2510406 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2510406 ']' 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2510406 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2510406 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2510406' 00:05:40.216 killing process with pid 2510406 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2510406 00:05:40.216 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2510406 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2510406 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2510406 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 2510406 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2510406 ']' 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2510406) - No such process 00:05:40.476 ERROR: process (pid: 2510406) is no longer running 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:40.476 00:05:40.476 real 0m1.739s 00:05:40.476 user 0m1.796s 00:05:40.476 sys 0m0.632s 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.476 21:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.476 ************************************ 00:05:40.476 END TEST default_locks 00:05:40.476 ************************************ 00:05:40.735 21:53:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:40.735 21:53:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.735 21:53:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.736 21:53:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.736 ************************************ 00:05:40.736 START TEST default_locks_via_rpc 00:05:40.736 ************************************ 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2510712 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 
2510712 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2510712 ']' 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.736 21:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.736 [2024-07-24 21:53:19.779596] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:40.736 [2024-07-24 21:53:19.779640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2510712 ] 00:05:40.736 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.736 [2024-07-24 21:53:19.848557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.736 [2024-07-24 21:53:19.923710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2510712 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2510712 00:05:41.674 21:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # 
killprocess 2510712 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2510712 ']' 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2510712 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2510712 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2510712' 00:05:41.933 killing process with pid 2510712 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2510712 00:05:41.933 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2510712 00:05:42.192 00:05:42.192 real 0m1.641s 00:05:42.192 user 0m1.715s 00:05:42.192 sys 0m0.556s 00:05:42.192 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.192 21:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.192 ************************************ 00:05:42.192 END TEST default_locks_via_rpc 00:05:42.192 ************************************ 00:05:42.452 21:53:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:42.452 21:53:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.452 21:53:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.452 21:53:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.452 ************************************ 00:05:42.452 START TEST non_locking_app_on_locked_coremask 00:05:42.452 ************************************ 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2511006 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2511006 /var/tmp/spdk.sock 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2511006 ']' 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:42.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.452 21:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.452 [2024-07-24 21:53:21.513989] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:42.452 [2024-07-24 21:53:21.514034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511006 ] 00:05:42.452 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.452 [2024-07-24 21:53:21.583292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.452 [2024-07-24 21:53:21.657158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2511270 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2511270 /var/tmp/spdk2.sock 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2511270 ']' 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.389 21:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.389 [2024-07-24 21:53:22.336051] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:43.389 [2024-07-24 21:53:22.336103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511270 ] 00:05:43.389 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.389 [2024-07-24 21:53:22.430146] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
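Once both targets are up, the trace moves on to locks_exist, which only asks whether the given pid is holding one of the spdk_cpu_lock files; the "lslocks: write error" lines that follow are most likely lslocks hitting a closed pipe after grep -q exits on the first match, not a test failure. A sketch of the helper as it appears in the trace:

    # Succeeds if the process identified by $1 holds at least one per-core lock file.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }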
00:05:43.389 [2024-07-24 21:53:22.430171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.389 [2024-07-24 21:53:22.579013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.957 21:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.957 21:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:43.957 21:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2511006 00:05:43.957 21:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2511006 00:05:43.957 21:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.334 lslocks: write error 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2511006 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2511006 ']' 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2511006 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2511006 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2511006' 00:05:45.334 killing process with pid 2511006 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2511006 00:05:45.334 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2511006 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2511270 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2511270 ']' 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2511270 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2511270 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2511270' 00:05:45.902 
killing process with pid 2511270 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2511270 00:05:45.902 21:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2511270 00:05:46.160 00:05:46.160 real 0m3.718s 00:05:46.160 user 0m3.957s 00:05:46.160 sys 0m1.214s 00:05:46.160 21:53:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.160 21:53:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.160 ************************************ 00:05:46.160 END TEST non_locking_app_on_locked_coremask 00:05:46.160 ************************************ 00:05:46.160 21:53:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:46.160 21:53:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.160 21:53:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.160 21:53:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.160 ************************************ 00:05:46.160 START TEST locking_app_on_unlocked_coremask 00:05:46.160 ************************************ 00:05:46.160 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:46.160 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2511825 00:05:46.160 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2511825 /var/tmp/spdk.sock 00:05:46.160 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:46.160 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2511825 ']' 00:05:46.160 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.160 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.160 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.161 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.161 21:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.161 [2024-07-24 21:53:25.314389] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:46.161 [2024-07-24 21:53:25.314440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511825 ] 00:05:46.161 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.419 [2024-07-24 21:53:25.384735] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
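The killprocess calls that closed the previous case follow a fixed guard sequence rather than a bare kill: verify an argument was passed, confirm the process is still alive with kill -0, refuse to signal anything whose comm name is sudo, then kill and wait so the exit status is reaped. A condensed sketch of that sequence as traced (not the verbatim autotest_common.sh body):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                                        # process must still exist
        if [ "$(uname)" = Linux ]; then
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                                       # reap the child and propagate its status
    }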
00:05:46.419 [2024-07-24 21:53:25.384760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.419 [2024-07-24 21:53:25.450905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2511841 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2511841 /var/tmp/spdk2.sock 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2511841 ']' 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.004 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.004 [2024-07-24 21:53:26.136905] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:05:47.004 [2024-07-24 21:53:26.136957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511841 ] 00:05:47.004 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.263 [2024-07-24 21:53:26.237952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.263 [2024-07-24 21:53:26.381574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.831 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.831 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:47.831 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2511841 00:05:47.831 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2511841 00:05:47.831 21:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.768 lslocks: write error 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2511825 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2511825 ']' 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2511825 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2511825 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2511825' 00:05:48.768 killing process with pid 2511825 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2511825 00:05:48.768 21:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2511825 00:05:49.336 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2511841 00:05:49.336 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2511841 ']' 00:05:49.336 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2511841 00:05:49.336 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:49.336 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.336 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2511841 00:05:49.595 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:05:49.595 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.595 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2511841' 00:05:49.595 killing process with pid 2511841 00:05:49.595 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2511841 00:05:49.595 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2511841 00:05:49.854 00:05:49.854 real 0m3.616s 00:05:49.854 user 0m3.865s 00:05:49.854 sys 0m1.194s 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.854 ************************************ 00:05:49.854 END TEST locking_app_on_unlocked_coremask 00:05:49.854 ************************************ 00:05:49.854 21:53:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:49.854 21:53:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.854 21:53:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.854 21:53:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.854 ************************************ 00:05:49.854 START TEST locking_app_on_locked_coremask 00:05:49.854 ************************************ 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2512402 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2512402 /var/tmp/spdk.sock 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2512402 ']' 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.854 21:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.854 [2024-07-24 21:53:29.019743] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
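The locking_app_on_locked_coremask case that has just started leans on the NOT helper from autotest_common.sh for its negative check: the second target is expected to die because core 0 is already locked, and NOT converts that failure into a pass. A reduced sketch of the exit-status handling visible in the trace (es=1, the es > 128 test, the final (( !es == 0 ))); the treatment of signal deaths here is an assumption of the sketch, not a quote of the real helper:

    NOT() {
        local es=0
        "$@" || es=$?              # run the wrapped command, keep its exit status
        (( es > 128 )) && return 1 # assumption: a death by signal is not the failure we expect
        (( !es == 0 ))             # pass only when the command actually failed
    }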
00:05:49.854 [2024-07-24 21:53:29.019794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512402 ] 00:05:49.854 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.129 [2024-07-24 21:53:29.088332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.129 [2024-07-24 21:53:29.152027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2512659 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2512659 /var/tmp/spdk2.sock 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2512659 /var/tmp/spdk2.sock 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2512659 /var/tmp/spdk2.sock 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2512659 ']' 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.698 21:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.698 [2024-07-24 21:53:29.859017] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:05:50.698 [2024-07-24 21:53:29.859071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512659 ] 00:05:50.698 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.958 [2024-07-24 21:53:29.956935] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2512402 has claimed it. 00:05:50.958 [2024-07-24 21:53:29.956975] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2512659) - No such process 00:05:51.583 ERROR: process (pid: 2512659) is no longer running 00:05:51.583 21:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.583 21:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:51.583 21:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:51.583 21:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.583 21:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.583 21:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.583 21:53:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2512402 00:05:51.583 21:53:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2512402 00:05:51.583 21:53:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.869 lslocks: write error 00:05:51.869 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2512402 00:05:51.869 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2512402 ']' 00:05:52.128 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2512402 00:05:52.128 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:52.128 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.128 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2512402 00:05:52.128 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.128 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.128 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2512402' 00:05:52.128 killing process with pid 2512402 00:05:52.128 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2512402 00:05:52.128 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2512402 00:05:52.387 00:05:52.387 real 0m2.481s 00:05:52.387 user 0m2.682s 00:05:52.387 sys 0m0.795s 00:05:52.387 21:53:31 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.387 21:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.387 ************************************ 00:05:52.387 END TEST locking_app_on_locked_coremask 00:05:52.387 ************************************ 00:05:52.387 21:53:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.387 21:53:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.387 21:53:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.387 21:53:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.387 ************************************ 00:05:52.387 START TEST locking_overlapped_coremask 00:05:52.387 ************************************ 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2512964 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2512964 /var/tmp/spdk.sock 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2512964 ']' 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.387 21:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.387 [2024-07-24 21:53:31.581123] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
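With the primary now started on -m 0x7, the test expects exactly three per-core lock files, /var/tmp/spdk_cpu_lock_000 through _002, and nothing else; that is what the check_remaining_locks call later in this trace asserts:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]   # glob result must match cores 0-2 exactly
    }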
00:05:52.387 [2024-07-24 21:53:31.581172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512964 ] 00:05:52.647 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.647 [2024-07-24 21:53:31.649645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.647 [2024-07-24 21:53:31.719236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.647 [2024-07-24 21:53:31.719330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.647 [2024-07-24 21:53:31.719333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2512978 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2512978 /var/tmp/spdk2.sock 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2512978 /var/tmp/spdk2.sock 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2512978 /var/tmp/spdk2.sock 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2512978 ']' 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.214 21:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.214 [2024-07-24 21:53:32.419199] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
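The second target being launched here uses -m 0x1c, and the failure that follows is nothing more than a mask intersection: 0x7 covers cores 0-2, 0x1c covers cores 2-4, so the new instance cannot take the core 2 lock the primary already holds. The collision can be read straight off the masks:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2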
00:05:53.214 [2024-07-24 21:53:32.419252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512978 ] 00:05:53.473 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.473 [2024-07-24 21:53:32.518996] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2512964 has claimed it. 00:05:53.473 [2024-07-24 21:53:32.519039] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2512978) - No such process 00:05:54.041 ERROR: process (pid: 2512978) is no longer running 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2512964 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2512964 ']' 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2512964 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.041 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.042 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2512964 00:05:54.042 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.042 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.042 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2512964' 00:05:54.042 killing process with pid 2512964 00:05:54.042 21:53:33 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 2512964 00:05:54.042 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2512964 00:05:54.302 00:05:54.302 real 0m1.887s 00:05:54.302 user 0m5.273s 00:05:54.302 sys 0m0.457s 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.302 ************************************ 00:05:54.302 END TEST locking_overlapped_coremask 00:05:54.302 ************************************ 00:05:54.302 21:53:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:54.302 21:53:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.302 21:53:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.302 21:53:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.302 ************************************ 00:05:54.302 START TEST locking_overlapped_coremask_via_rpc 00:05:54.302 ************************************ 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2513265 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2513265 /var/tmp/spdk.sock 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2513265 ']' 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.302 21:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.562 [2024-07-24 21:53:33.555872] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:54.562 [2024-07-24 21:53:33.555921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513265 ] 00:05:54.562 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.562 [2024-07-24 21:53:33.626411] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
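locking_overlapped_coremask_via_rpc reverses the order of operations: both targets are allowed to come up on overlapping masks because each is started with --disable-cpumask-locks, and the locks are only claimed afterwards over the framework_enable_cpumask_locks RPC. rpc_cmd in the trace is the autotest wrapper; the direct scripts/rpc.py calls below are the rough equivalent, with paths as used in this run:

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # primary, cores 0-2
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # secondary, cores 2-4
    scripts/rpc.py framework_enable_cpumask_locks                                   # primary claims locks 000-002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks            # must fail: core 2 is taken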
00:05:54.562 [2024-07-24 21:53:33.626437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.562 [2024-07-24 21:53:33.700993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.562 [2024-07-24 21:53:33.701090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.562 [2024-07-24 21:53:33.701092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.500 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.500 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:55.500 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2513472 00:05:55.500 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2513472 /var/tmp/spdk2.sock 00:05:55.500 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:55.500 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2513472 ']' 00:05:55.500 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.501 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.501 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.501 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.501 21:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.501 [2024-07-24 21:53:34.406420] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:55.501 [2024-07-24 21:53:34.406476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513472 ] 00:05:55.501 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.501 [2024-07-24 21:53:34.507412] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
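The reactors that come up next for this 0x1c instance land on cores 2, 3 and 4, which are simply the set bits of the mask; decoding the mask by hand makes the later "Cannot create lock on core 2" error easy to predict:

    mask=0x1c
    for (( core = 0; core < 64; core++ )); do
        if (( (mask >> core) & 1 )); then echo "core $core"; fi
    done
    # prints: core 2, core 3, core 4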
00:05:55.501 [2024-07-24 21:53:34.507444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.501 [2024-07-24 21:53:34.646731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.501 [2024-07-24 21:53:34.649763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.501 [2024-07-24 21:53:34.649763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.070 [2024-07-24 21:53:35.236795] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2513265 has claimed it. 
00:05:56.070 request: 00:05:56.070 { 00:05:56.070 "method": "framework_enable_cpumask_locks", 00:05:56.070 "req_id": 1 00:05:56.070 } 00:05:56.070 Got JSON-RPC error response 00:05:56.070 response: 00:05:56.070 { 00:05:56.070 "code": -32603, 00:05:56.070 "message": "Failed to claim CPU core: 2" 00:05:56.070 } 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2513265 /var/tmp/spdk.sock 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2513265 ']' 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.070 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.330 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.330 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:56.330 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2513472 /var/tmp/spdk2.sock 00:05:56.330 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2513472 ']' 00:05:56.330 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.330 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.330 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
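(The -32603 failure above is the expected outcome of this test: the first target was started with -m 0x7 and has already taken the per-core lock for core 2 via framework_enable_cpumask_locks, so the second target on mask 0x1c cannot claim it. Stripped of the rpc_cmd test wrapper, the failing call is just the plain RPC against the second target's socket; a minimal sketch, assuming the in-tree scripts/rpc.py shown elsewhere in this log:

  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

which returns the same "Failed to claim CPU core: 2" JSON-RPC error for as long as process 2513265 holds the lock.)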
00:05:56.330 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.330 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.589 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.589 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:56.589 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:56.589 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.589 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.589 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.589 00:05:56.589 real 0m2.112s 00:05:56.589 user 0m0.831s 00:05:56.589 sys 0m0.208s 00:05:56.589 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.589 21:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.589 ************************************ 00:05:56.589 END TEST locking_overlapped_coremask_via_rpc 00:05:56.589 ************************************ 00:05:56.589 21:53:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:56.589 21:53:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2513265 ]] 00:05:56.589 21:53:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2513265 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2513265 ']' 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2513265 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2513265 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2513265' 00:05:56.589 killing process with pid 2513265 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2513265 00:05:56.589 21:53:35 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2513265 00:05:56.848 21:53:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2513472 ]] 00:05:56.848 21:53:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2513472 00:05:56.849 21:53:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2513472 ']' 00:05:56.849 21:53:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2513472 00:05:56.849 21:53:36 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:56.849 21:53:36 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:56.849 21:53:36 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2513472 00:05:57.108 21:53:36 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:57.108 21:53:36 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:57.108 21:53:36 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2513472' 00:05:57.108 killing process with pid 2513472 00:05:57.108 21:53:36 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2513472 00:05:57.108 21:53:36 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2513472 00:05:57.368 21:53:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.368 21:53:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:57.368 21:53:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2513265 ]] 00:05:57.368 21:53:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2513265 00:05:57.368 21:53:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2513265 ']' 00:05:57.368 21:53:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2513265 00:05:57.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2513265) - No such process 00:05:57.368 21:53:36 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2513265 is not found' 00:05:57.368 Process with pid 2513265 is not found 00:05:57.368 21:53:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2513472 ]] 00:05:57.368 21:53:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2513472 00:05:57.368 21:53:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2513472 ']' 00:05:57.368 21:53:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2513472 00:05:57.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2513472) - No such process 00:05:57.368 21:53:36 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2513472 is not found' 00:05:57.368 Process with pid 2513472 is not found 00:05:57.368 21:53:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.368 00:05:57.368 real 0m18.612s 00:05:57.368 user 0m30.732s 00:05:57.368 sys 0m6.099s 00:05:57.368 21:53:36 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.368 21:53:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.368 ************************************ 00:05:57.368 END TEST cpu_locks 00:05:57.368 ************************************ 00:05:57.368 00:05:57.368 real 0m43.461s 00:05:57.368 user 1m20.116s 00:05:57.368 sys 0m10.138s 00:05:57.368 21:53:36 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.368 21:53:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.368 ************************************ 00:05:57.368 END TEST event 00:05:57.368 ************************************ 00:05:57.368 21:53:36 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:57.368 21:53:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.368 21:53:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.368 21:53:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.368 ************************************ 00:05:57.368 START TEST thread 00:05:57.368 ************************************ 00:05:57.368 21:53:36 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:57.627 * Looking for test storage... 00:05:57.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:57.627 21:53:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.627 21:53:36 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:57.627 21:53:36 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.627 21:53:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.627 ************************************ 00:05:57.627 START TEST thread_poller_perf 00:05:57.627 ************************************ 00:05:57.627 21:53:36 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.627 [2024-07-24 21:53:36.678044] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:05:57.627 [2024-07-24 21:53:36.678118] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513902 ] 00:05:57.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.627 [2024-07-24 21:53:36.750783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.627 [2024-07-24 21:53:36.820254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.627 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:59.008 ====================================== 00:05:59.008 busy:2509379428 (cyc) 00:05:59.008 total_run_count: 430000 00:05:59.008 tsc_hz: 2500000000 (cyc) 00:05:59.008 ====================================== 00:05:59.008 poller_cost: 5835 (cyc), 2334 (nsec) 00:05:59.008 00:05:59.008 real 0m1.236s 00:05:59.008 user 0m1.146s 00:05:59.008 sys 0m0.086s 00:05:59.008 21:53:37 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.008 21:53:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.008 ************************************ 00:05:59.008 END TEST thread_poller_perf 00:05:59.008 ************************************ 00:05:59.008 21:53:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.008 21:53:37 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:59.008 21:53:37 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.008 21:53:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.008 ************************************ 00:05:59.008 START TEST thread_poller_perf 00:05:59.008 ************************************ 00:05:59.008 21:53:37 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.008 [2024-07-24 21:53:37.988306] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:05:59.008 [2024-07-24 21:53:37.988383] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514185 ] 00:05:59.008 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.008 [2024-07-24 21:53:38.060679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.008 [2024-07-24 21:53:38.127075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.008 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:00.388 ====================================== 00:06:00.388 busy:2501686848 (cyc) 00:06:00.388 total_run_count: 5578000 00:06:00.388 tsc_hz: 2500000000 (cyc) 00:06:00.388 ====================================== 00:06:00.388 poller_cost: 448 (cyc), 179 (nsec) 00:06:00.388 00:06:00.388 real 0m1.230s 00:06:00.388 user 0m1.129s 00:06:00.388 sys 0m0.097s 00:06:00.388 21:53:39 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.388 21:53:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.388 ************************************ 00:06:00.388 END TEST thread_poller_perf 00:06:00.388 ************************************ 00:06:00.388 21:53:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:00.388 00:06:00.388 real 0m2.705s 00:06:00.388 user 0m2.360s 00:06:00.388 sys 0m0.357s 00:06:00.388 21:53:39 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.388 21:53:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.388 ************************************ 00:06:00.388 END TEST thread 00:06:00.388 ************************************ 00:06:00.388 21:53:39 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:00.388 21:53:39 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:00.388 21:53:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.388 21:53:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.388 21:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:00.388 ************************************ 00:06:00.388 START TEST app_cmdline 00:06:00.388 ************************************ 00:06:00.388 21:53:39 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:00.388 * Looking for test storage... 00:06:00.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:00.388 21:53:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:00.388 21:53:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2514503 00:06:00.388 21:53:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2514503 00:06:00.388 21:53:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:00.388 21:53:39 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2514503 ']' 00:06:00.388 21:53:39 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.388 21:53:39 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.388 21:53:39 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
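(For the two poller_perf runs above, poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds with tsc_hz; a quick check of the reported numbers:

  run 1 (-l 1): 2509379428 cyc / 430000 polls  ≈ 5835 cyc ≈ 5835 / 2.5 GHz ≈ 2334 ns per poll
  run 2 (-l 0): 2501686848 cyc / 5578000 polls ≈ 448 cyc  ≈ 448 / 2.5 GHz  ≈ 179 ns per poll

The only difference between the two invocations is the -l timer-period argument (1 µs vs 0); the poll callback itself is the same in both runs.)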
00:06:00.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.388 21:53:39 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.388 21:53:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.388 [2024-07-24 21:53:39.478657] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:06:00.388 [2024-07-24 21:53:39.478706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514503 ] 00:06:00.388 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.388 [2024-07-24 21:53:39.548192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.647 [2024-07-24 21:53:39.621464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.216 21:53:40 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.216 21:53:40 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:01.216 21:53:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:01.476 { 00:06:01.476 "version": "SPDK v24.09-pre git sha1 38b03952e", 00:06:01.476 "fields": { 00:06:01.476 "major": 24, 00:06:01.476 "minor": 9, 00:06:01.476 "patch": 0, 00:06:01.476 "suffix": "-pre", 00:06:01.476 "commit": "38b03952e" 00:06:01.476 } 00:06:01.476 } 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.476 request: 00:06:01.476 { 00:06:01.476 "method": "env_dpdk_get_mem_stats", 00:06:01.476 "req_id": 1 00:06:01.476 } 00:06:01.476 Got JSON-RPC error response 00:06:01.476 response: 00:06:01.476 { 00:06:01.476 "code": -32601, 00:06:01.476 "message": "Method not found" 00:06:01.476 } 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.476 21:53:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2514503 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2514503 ']' 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2514503 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.476 21:53:40 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2514503 00:06:01.735 21:53:40 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.735 21:53:40 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.735 21:53:40 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2514503' 00:06:01.735 killing process with pid 2514503 00:06:01.735 21:53:40 app_cmdline -- common/autotest_common.sh@969 -- # kill 2514503 00:06:01.735 21:53:40 app_cmdline -- common/autotest_common.sh@974 -- # wait 2514503 00:06:01.994 00:06:01.994 real 0m1.701s 00:06:01.994 user 0m1.969s 00:06:01.994 sys 0m0.498s 00:06:01.994 21:53:41 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.994 21:53:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.994 ************************************ 00:06:01.994 END TEST app_cmdline 00:06:01.994 ************************************ 00:06:01.994 21:53:41 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:01.994 21:53:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.994 21:53:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.994 21:53:41 -- common/autotest_common.sh@10 -- # set +x 00:06:01.994 ************************************ 00:06:01.994 START TEST version 00:06:01.994 ************************************ 00:06:01.994 21:53:41 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:01.994 * Looking for test storage... 
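(The "Method not found" (-32601) response above is also the expected result: cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and env_dpdk_get_mem_stats is rejected. Without the test wrappers the two calls are simply, as a sketch:

  scripts/rpc.py spdk_get_version          # allowed, returns the version JSON shown above
  scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 while the allow-list is active
)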
00:06:01.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:01.994 21:53:41 version -- app/version.sh@17 -- # get_header_version major 00:06:01.994 21:53:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.994 21:53:41 version -- app/version.sh@14 -- # cut -f2 00:06:01.994 21:53:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.994 21:53:41 version -- app/version.sh@17 -- # major=24 00:06:01.994 21:53:41 version -- app/version.sh@18 -- # get_header_version minor 00:06:01.994 21:53:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.994 21:53:41 version -- app/version.sh@14 -- # cut -f2 00:06:01.994 21:53:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.994 21:53:41 version -- app/version.sh@18 -- # minor=9 00:06:01.994 21:53:41 version -- app/version.sh@19 -- # get_header_version patch 00:06:01.994 21:53:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.994 21:53:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.994 21:53:41 version -- app/version.sh@14 -- # cut -f2 00:06:01.994 21:53:41 version -- app/version.sh@19 -- # patch=0 00:06:01.994 21:53:41 version -- app/version.sh@20 -- # get_header_version suffix 00:06:01.995 21:53:41 version -- app/version.sh@14 -- # cut -f2 00:06:01.995 21:53:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.995 21:53:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.253 21:53:41 version -- app/version.sh@20 -- # suffix=-pre 00:06:02.253 21:53:41 version -- app/version.sh@22 -- # version=24.9 00:06:02.253 21:53:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:02.253 21:53:41 version -- app/version.sh@28 -- # version=24.9rc0 00:06:02.253 21:53:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:02.253 21:53:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:02.253 21:53:41 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:02.253 21:53:41 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:02.253 00:06:02.253 real 0m0.157s 00:06:02.253 user 0m0.071s 00:06:02.253 sys 0m0.121s 00:06:02.253 21:53:41 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.253 21:53:41 version -- common/autotest_common.sh@10 -- # set +x 00:06:02.253 ************************************ 00:06:02.253 END TEST version 00:06:02.253 ************************************ 00:06:02.253 21:53:41 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:02.253 21:53:41 -- spdk/autotest.sh@202 -- # uname -s 00:06:02.253 21:53:41 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:02.253 21:53:41 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:02.253 21:53:41 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:02.253 21:53:41 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
00:06:02.253 21:53:41 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:02.253 21:53:41 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:02.253 21:53:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:02.253 21:53:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.253 21:53:41 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:02.253 21:53:41 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:02.253 21:53:41 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:02.253 21:53:41 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:02.253 21:53:41 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:02.253 21:53:41 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:02.253 21:53:41 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:02.253 21:53:41 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:02.253 21:53:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.253 21:53:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.253 ************************************ 00:06:02.253 START TEST nvmf_tcp 00:06:02.253 ************************************ 00:06:02.253 21:53:41 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:02.253 * Looking for test storage... 00:06:02.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:02.253 21:53:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:02.512 21:53:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:02.512 21:53:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:02.512 21:53:41 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:02.512 21:53:41 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.512 21:53:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.512 ************************************ 00:06:02.512 START TEST nvmf_target_core 00:06:02.512 ************************************ 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:02.512 * Looking for test storage... 00:06:02.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:02.512 ************************************ 00:06:02.512 START TEST nvmf_abort 00:06:02.512 ************************************ 00:06:02.512 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:02.772 * Looking for test storage... 
00:06:02.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:02.772 21:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:09.349 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:09.349 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:09.349 21:53:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:09.349 Found net devices under 0000:af:00.0: cvl_0_0 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:09.349 Found net devices under 0000:af:00.1: cvl_0_1 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:09.349 
21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:09.349 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:09.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:09.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:06:09.350 00:06:09.350 --- 10.0.0.2 ping statistics --- 00:06:09.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.350 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:09.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:09.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:06:09.350 00:06:09.350 --- 10.0.0.1 ping statistics --- 00:06:09.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.350 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:09.350 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=2518309 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2518309 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2518309 ']' 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.610 21:53:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.610 [2024-07-24 21:53:48.625597] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:06:09.610 [2024-07-24 21:53:48.625642] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.610 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.610 [2024-07-24 21:53:48.700136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.610 [2024-07-24 21:53:48.770136] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:09.610 [2024-07-24 21:53:48.770179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:09.610 [2024-07-24 21:53:48.770189] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:09.610 [2024-07-24 21:53:48.770197] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:09.610 [2024-07-24 21:53:48.770204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
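The nvmf_tcp_init output above amounts to a small two-port loopback topology: the first E810 port (cvl_0_0) is moved into a private network namespace and plays the target, while the second port (cvl_0_1) stays in the default namespace as the initiator. A minimal sketch of that setup, assuming only the commands, names and addresses already printed in the log:

# Namespace-based NVMe/TCP test topology (names and addresses as shown above)
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check

This is why the nvmf_tgt process started next is prefixed with 'ip netns exec cvl_0_0_ns_spdk', as recorded by the NVMF_TARGET_NS_CMD / NVMF_APP lines above.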
00:06:09.610 [2024-07-24 21:53:48.770307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.610 [2024-07-24 21:53:48.770392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.610 [2024-07-24 21:53:48.770394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 [2024-07-24 21:53:49.490361] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 Malloc0 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 Delay0 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 [2024-07-24 21:53:49.562707] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.548 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:10.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.548 [2024-07-24 21:53:49.637197] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:13.086 [2024-07-24 21:53:51.721165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194f3f0 is same with the state(5) to be set 00:06:13.086 Initializing NVMe Controllers 00:06:13.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:13.086 controller IO queue size 128 less than required 00:06:13.086 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:13.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:13.086 Initialization complete. Launching workers. 
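Before the abort statistics that follow, target/abort.sh issues a short RPC sequence and then launches the abort example; restated as a sketch (rpc_cmd in the log is effectively a wrapper around scripts/rpc.py, paths are shortened, and the 1,000,000 us latencies on the delay bdev are presumably there so that plenty of I/O is still queued when the aborts arrive):

# RPC sequence visible in the log above
rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 1-second run (-t 1) at queue depth 128 against the delayed namespace
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR lines immediately below report how many of those queued commands the abort example managed to abort.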
00:06:13.086 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 42173 00:06:13.086 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42238, failed to submit 62 00:06:13.086 success 42177, unsuccess 61, failed 0 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:13.086 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:13.086 rmmod nvme_tcp 00:06:13.086 rmmod nvme_fabrics 00:06:13.086 rmmod nvme_keyring 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2518309 ']' 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2518309 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2518309 ']' 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2518309 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2518309 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2518309' 00:06:13.087 killing process with pid 2518309 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2518309 00:06:13.087 21:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2518309 00:06:13.087 21:53:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:13.087 21:53:52 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:13.087 21:53:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:13.087 21:53:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:13.087 21:53:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:13.087 21:53:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.087 21:53:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.087 21:53:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.994 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:14.994 00:06:14.994 real 0m12.459s 00:06:14.994 user 0m13.192s 00:06:14.994 sys 0m6.322s 00:06:14.994 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.994 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.994 ************************************ 00:06:14.994 END TEST nvmf_abort 00:06:14.994 ************************************ 00:06:14.994 21:53:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:14.994 21:53:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:14.994 21:53:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.994 21:53:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.254 ************************************ 00:06:15.254 START TEST nvmf_ns_hotplug_stress 00:06:15.254 ************************************ 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:15.254 * Looking for test storage... 
00:06:15.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.254 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:15.255 21:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:21.841 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.841 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:21.841 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:21.842 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:21.842 21:54:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:21.842 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:21.842 Found net devices under 0000:af:00.0: cvl_0_0 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:21.842 Found net devices under 0000:af:00.1: cvl_0_1 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.842 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.842 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:21.842 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.107 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.107 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.107 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:22.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:22.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:06:22.107 00:06:22.107 --- 10.0.0.2 ping statistics --- 00:06:22.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.108 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:06:22.108 00:06:22.108 --- 10.0.0.1 ping statistics --- 00:06:22.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.108 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2522639 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2522639 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2522639 ']' 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.108 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:22.108 [2024-07-24 21:54:01.222170] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:06:22.108 [2024-07-24 21:54:01.222217] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.108 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.108 [2024-07-24 21:54:01.293614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.367 [2024-07-24 21:54:01.361831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.367 [2024-07-24 21:54:01.361871] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.367 [2024-07-24 21:54:01.361880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.367 [2024-07-24 21:54:01.361889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.367 [2024-07-24 21:54:01.361895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:22.367 [2024-07-24 21:54:01.362005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.367 [2024-07-24 21:54:01.362093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.367 [2024-07-24 21:54:01.362095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.935 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.935 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:22.935 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:22.935 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.935 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:22.935 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.935 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:22.935 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:23.194 [2024-07-24 21:54:02.213951] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.194 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:23.454 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.454 
[2024-07-24 21:54:02.598721] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.454 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:23.713 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:23.972 Malloc0 00:06:23.972 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:23.972 Delay0 00:06:23.972 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.231 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:24.490 NULL1 00:06:24.490 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:24.749 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2523160 00:06:24.749 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:24.749 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:24.749 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.749 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.749 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.008 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:25.008 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:25.267 true 00:06:25.267 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:25.267 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.527 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:06:25.527 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:25.527 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:25.786 true 00:06:25.786 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:25.786 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.045 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.304 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:26.304 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:26.304 true 00:06:26.304 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:26.304 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.563 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.821 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:26.821 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:26.821 true 00:06:27.080 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:27.080 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.080 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.338 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:27.338 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:27.599 true 00:06:27.599 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:27.599 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.599 21:54:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.903 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:27.903 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:28.163 true 00:06:28.163 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:28.163 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.422 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.422 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:28.422 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:28.682 true 00:06:28.682 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:28.682 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.941 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.200 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:29.200 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:29.200 true 00:06:29.200 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:29.200 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.459 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.718 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:29.718 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:29.977 true 00:06:29.977 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:29.977 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.977 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.239 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:30.239 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:30.500 true 00:06:30.500 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:30.500 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.760 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.760 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:30.760 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:31.019 true 00:06:31.019 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:31.019 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.278 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.537 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:31.537 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:31.537 true 00:06:31.537 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:31.537 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.796 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.056 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:32.056 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:32.315 true 00:06:32.315 21:54:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:32.315 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.574 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.574 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:32.574 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:32.833 true 00:06:32.833 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:32.833 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.092 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.350 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:33.350 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:33.350 true 00:06:33.350 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:33.350 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.609 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.868 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:33.868 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:33.868 true 00:06:34.127 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:34.127 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.127 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.385 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:34.385 21:54:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:34.644 true 00:06:34.644 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:34.644 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.902 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.902 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:34.903 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:35.162 true 00:06:35.162 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:35.162 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.422 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.681 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:35.681 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:35.681 true 00:06:35.681 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:35.681 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.939 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.198 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:36.198 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:36.457 true 00:06:36.457 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:36.457 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.458 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.717 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:36.717 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:36.976 true 00:06:36.976 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:36.976 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.235 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.235 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:37.235 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:37.494 true 00:06:37.494 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:37.494 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.753 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.012 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:38.012 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:38.012 true 00:06:38.012 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:38.012 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.272 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.531 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:38.531 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:38.790 true 00:06:38.790 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:38.790 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.050 21:54:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.050 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:39.050 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:39.310 true 00:06:39.310 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:39.310 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.569 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.827 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:39.827 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:39.827 true 00:06:39.827 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:39.827 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.086 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.372 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:40.372 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:40.372 true 00:06:40.634 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:40.634 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.634 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.894 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:40.894 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:41.153 true 00:06:41.153 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:41.153 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.153 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.413 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:41.413 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:41.672 true 00:06:41.672 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:41.672 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.931 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.931 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:41.931 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:42.190 true 00:06:42.190 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:42.190 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.448 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.708 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:42.708 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:42.708 true 00:06:42.708 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:42.708 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.966 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.227 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:43.227 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:43.486 true 00:06:43.486 21:54:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:43.486 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.486 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.744 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:43.744 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:44.004 true 00:06:44.004 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:44.004 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.263 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.263 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:44.263 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:44.525 true 00:06:44.525 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:44.525 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.785 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.043 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:45.043 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:45.043 true 00:06:45.043 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:45.043 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.302 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.561 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:45.561 21:54:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:45.821 true 00:06:45.821 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:45.821 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.080 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.080 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:46.080 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:46.339 true 00:06:46.339 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:46.339 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.607 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.867 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:46.867 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:46.867 true 00:06:46.867 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:46.867 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.126 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.385 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:47.385 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:47.385 true 00:06:47.644 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:47.644 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.644 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.903 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:47.903 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:48.161 true 00:06:48.161 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:48.161 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.419 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.419 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:48.419 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:48.678 true 00:06:48.678 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:48.678 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.937 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.197 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:49.197 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:49.197 true 00:06:49.197 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:49.197 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.456 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.714 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:49.714 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:49.973 true 00:06:49.973 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:49.973 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.973 21:54:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.232 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:50.232 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:50.490 true 00:06:50.490 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:50.490 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.749 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.749 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:50.749 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:51.008 true 00:06:51.008 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:51.008 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.271 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.543 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:51.543 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:51.543 true 00:06:51.543 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:51.543 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.801 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.060 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:52.060 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:52.319 true 00:06:52.319 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:52.319 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.319 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.578 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:52.578 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:52.836 true 00:06:52.836 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:52.836 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.095 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.095 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:53.095 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:53.354 true 00:06:53.354 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:53.354 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.612 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.871 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:53.871 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:53.871 true 00:06:53.871 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:53.871 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.131 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.390 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:54.390 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:54.650 true 00:06:54.650 21:54:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:54.650 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.650 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.909 Initializing NVMe Controllers 00:06:54.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:54.909 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:06:54.909 Controller IO queue size 128, less than required. 00:06:54.909 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:54.909 WARNING: Some requested NVMe devices were skipped 00:06:54.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:54.909 Initialization complete. Launching workers. 00:06:54.909 ======================================================== 00:06:54.909 Latency(us) 00:06:54.909 Device Information : IOPS MiB/s Average min max 00:06:54.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 28679.76 14.00 4463.28 2150.42 7894.99 00:06:54.909 ======================================================== 00:06:54.909 Total : 28679.76 14.00 4463.28 2150.42 7894.99 00:06:54.909 00:06:54.910 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:54.910 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:55.168 true 00:06:55.168 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2523160 00:06:55.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2523160) - No such process 00:06:55.168 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2523160 00:06:55.168 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.426 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.426 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:55.426 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:55.426 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:55.426 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.426 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:55.685 null0 00:06:55.685 
21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.685 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.685 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:55.944 null1 00:06:55.944 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.944 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.944 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:55.944 null2 00:06:55.944 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.944 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.944 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:56.204 null3 00:06:56.204 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.204 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.204 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:56.463 null4 00:06:56.463 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.463 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.463 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:56.463 null5 00:06:56.723 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.723 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.723 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:56.723 null6 00:06:56.723 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.723 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.723 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:56.983 null7 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2529228 2529231 2529234 2529236 2529240 2529243 2529246 2529249 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.983 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.243 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.502 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.502 21:54:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.502 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.502 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.502 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.502 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.502 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.502 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.761 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.762 21:54:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.762 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.020 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.020 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.020 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.021 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.021 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.021 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.021 21:54:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
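(The interleaved trace above comes from the add_remove helper in target/ns_hotplug_stress.sh: each worker is handed one namespace ID and one null bdev and hot-adds/hot-removes it against nqn.2016-06.io.spdk:cnode1 ten times. The following is a minimal sketch reconstructed from the xtrace lines sh@14-sh@18, not the verbatim script; details of the real helper may differ.)

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path taken from the trace above

# Hedged reconstruction of add_remove() as implied by the trace.
add_remove() {
    local nsid=$1 bdev=$2                  # e.g. "add_remove 8 null7" in the trace
    for ((i = 0; i < 10; i++)); do
        # attach the null bdev as namespace $nsid, then detach it again
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}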
00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.021 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.280 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.280 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.280 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.280 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.280 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.280 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.280 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.280 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.540 21:54:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.540 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.541 21:54:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.541 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.800 21:54:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.800 21:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.059 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.059 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.059 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.059 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.059 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.059 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.059 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.059 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.318 21:54:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.318 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.577 21:54:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.577 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.836 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.836 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.836 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.836 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.836 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.836 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.836 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.836 21:54:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
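(The per-namespace workers are launched in the background, one per thread, and the script then waits on all of their PIDs — the "wait 2529228 2529231 ..." entry earlier in the trace — before the trap is cleared and nvmftestfini tears the target down below. A hedged sketch of that driver loop, inferred from the sh@62-sh@66 trace lines rather than copied from the script:)

# Eight workers (nsid 1..8 against bdevs null0..null7), matching the trace.
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &       # run each hotplug loop concurrently
    pids+=($!)
done
wait "${pids[@]}"                          # block until every hotplug worker finishes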
00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.095 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.355 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.355 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.355 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.355 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.355 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.356 21:54:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.356 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.615 21:54:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.615 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:00.616 21:54:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:00.616 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:00.616 rmmod nvme_tcp 00:07:00.875 rmmod nvme_fabrics 00:07:00.875 rmmod nvme_keyring 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2522639 ']' 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2522639 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2522639 ']' 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2522639 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2522639 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2522639' 00:07:00.875 killing process with pid 2522639 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2522639 00:07:00.875 21:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2522639 00:07:01.135 21:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:01.135 21:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:01.135 21:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:01.135 21:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:01.135 21:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:01.135 21:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.135 21:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.135 21:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.042 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:03.042 00:07:03.042 real 0m47.981s 00:07:03.042 user 3m11.890s 00:07:03.042 sys 0m23.147s 00:07:03.042 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.043 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:03.043 ************************************ 00:07:03.043 END TEST nvmf_ns_hotplug_stress 00:07:03.043 ************************************ 00:07:03.043 21:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:03.043 21:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.043 21:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.043 21:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 ************************************ 00:07:03.349 START TEST nvmf_delete_subsystem 00:07:03.349 ************************************ 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:03.349 * Looking for test storage... 00:07:03.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.349 21:54:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:03.349 21:54:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:09.943 21:54:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:09.943 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:09.943 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.943 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:09.944 Found net devices under 0000:af:00.0: cvl_0_0 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:09.944 Found net devices under 0000:af:00.1: cvl_0_1 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.944 21:54:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:09.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:07:09.944 00:07:09.944 --- 10.0.0.2 ping statistics --- 00:07:09.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.944 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:07:09.944 00:07:09.944 --- 10.0.0.1 ping statistics --- 00:07:09.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.944 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2533735 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2533735 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2533735 ']' 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.944 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.944 [2024-07-24 21:54:48.743776] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:07:09.944 [2024-07-24 21:54:48.743821] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.944 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.944 [2024-07-24 21:54:48.817118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.944 [2024-07-24 21:54:48.891930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.944 [2024-07-24 21:54:48.891967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.944 [2024-07-24 21:54:48.891977] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.944 [2024-07-24 21:54:48.891986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.944 [2024-07-24 21:54:48.891994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
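For orientation, the network-namespace plumbing recorded in the xtrace above condenses to the sketch below. It is reconstructed only from the logged commands, not from nvmf/common.sh itself; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply what this run picked for the two e810 ports, so treat the exact names as run-specific.

    # target port moves into its own namespace; the initiator port stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator sanity check
    # the target application is then started inside the namespace and backgrounded;
    # the script waits for it to listen on /var/tmp/spdk.sock before issuing RPCs
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &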
00:07:09.944 [2024-07-24 21:54:48.892042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.944 [2024-07-24 21:54:48.892044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 [2024-07-24 21:54:49.595270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 [2024-07-24 21:54:49.611433] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 NULL1 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 Delay0 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2533814 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:10.513 21:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:10.513 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.513 [2024-07-24 21:54:49.696016] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
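Everything the first stress pass needs is now in place, and the xtrace that follows is the interesting part: the subsystem is deleted while spdk_nvme_perf still has a queue-depth-128 workload stuck behind the deliberately slow Delay0 bdev, so the long run of "Read/Write completed with error (sct=0, sc=8)" lines below is the expected burst of failed completions, not a test failure. Condensed from the logged commands (rpc_cmd is the autotest wrapper that forwards these calls to the target over /var/tmp/spdk.sock; the backslash line continuations are added here for readability), the sequence is roughly:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512                   # 1000 MB null backend, 512-byte blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000           # ~1 s of injected latency on every I/O
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &             # host-side load: qd 128, 70/30 randrw
    perf_pid=$!
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # pull the subsystem out mid-run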
00:07:13.054 21:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.054 21:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.054 21:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 starting I/O failed: -6 00:07:13.054 [2024-07-24 21:54:51.914021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ad400d000 is same with the state(5) to be set 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, sc=8) 00:07:13.054 Read completed with error (sct=0, 
sc=8) 00:07:13.054 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read 
completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error 
(sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 starting I/O failed: -6 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Write completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 Read completed with error (sct=0, sc=8) 00:07:13.055 [2024-07-24 21:54:51.914764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e710 is same with the state(5) to be set 00:07:13.993 [2024-07-24 21:54:52.876390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fe450 is same with the state(5) to be set 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 [2024-07-24 21:54:52.916411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fdaf0 is same with the state(5) to be set 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Write completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 
00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.993 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 [2024-07-24 21:54:52.916596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ea40 is same with the state(5) to be set 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 [2024-07-24 21:54:52.916798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fd910 is same with the state(5) to be set 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read 
completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 Write completed with error (sct=0, sc=8) 00:07:13.994 Read completed with error (sct=0, sc=8) 00:07:13.994 [2024-07-24 21:54:52.916886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ad400d330 is same with the state(5) to be set 00:07:13.994 Initializing NVMe Controllers 00:07:13.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:13.994 Controller IO queue size 128, less than required. 00:07:13.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:13.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:13.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:13.994 Initialization complete. Launching workers. 00:07:13.994 ======================================================== 00:07:13.994 Latency(us) 00:07:13.994 Device Information : IOPS MiB/s Average min max 00:07:13.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.20 0.09 983100.83 706.01 2001816.68 00:07:13.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.94 0.07 893946.89 396.29 1011981.93 00:07:13.994 ======================================================== 00:07:13.994 Total : 335.14 0.16 942948.53 396.29 2001816.68 00:07:13.994 00:07:13.994 [2024-07-24 21:54:52.917810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fe450 (9): Bad file descriptor 00:07:13.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:13.994 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.994 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:13.994 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2533814 00:07:13.994 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2533814 00:07:14.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2533814) - No such process 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2533814 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2533814 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # 
local arg=wait 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2533814 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.254 [2024-07-24 21:54:53.442771] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2534563 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2534563 00:07:14.254 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 
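The second pass re-creates the subsystem against the same Delay0 namespace and then simply watches a shorter perf run finish; the repeated kill -0 / sleep 0.5 lines that follow are that polling loop. A rough sketch of what the xtrace implies (the exact control flow and failure handling live in delete_subsystem.sh and may differ in detail):

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # perf still running?
        sleep 0.5
        ((delay++ > 20)) && break                # bail out after ~10 s; the log shows the same bound
    done
    wait "$perf_pid"                             # reap perf; kill's "No such process" above is the normal loop exit

The latency table further down (averages just over 1,000,000 us) is consistent with the 1 s delay configured on Delay0, so this pass completes cleanly before the subsystem is torn down and nvmftestfini unloads the nvme-tcp modules.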
00:07:14.513 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.513 [2024-07-24 21:54:53.508180] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:14.772 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:14.772 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2534563 00:07:14.772 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:15.340 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:15.340 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2534563 00:07:15.340 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:15.907 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:15.907 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2534563 00:07:15.907 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.476 21:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.476 21:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2534563 00:07:16.476 21:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.044 21:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.045 21:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2534563 00:07:17.045 21:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.304 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.304 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2534563 00:07:17.304 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.577 Initializing NVMe Controllers 00:07:17.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:17.577 Controller IO queue size 128, less than required. 00:07:17.577 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:17.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:17.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:17.577 Initialization complete. Launching workers. 
00:07:17.577 ======================================================== 00:07:17.577 Latency(us) 00:07:17.577 Device Information : IOPS MiB/s Average min max 00:07:17.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003029.08 1000227.31 1009369.64 00:07:17.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004440.45 1000228.00 1011297.24 00:07:17.578 ======================================================== 00:07:17.578 Total : 256.00 0.12 1003734.76 1000227.31 1011297.24 00:07:17.578 00:07:17.837 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.837 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2534563 00:07:17.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2534563) - No such process 00:07:17.837 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2534563 00:07:17.837 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:17.837 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:17.837 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:17.837 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:17.837 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:17.837 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:17.837 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:17.837 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:17.837 rmmod nvme_tcp 00:07:17.837 rmmod nvme_fabrics 00:07:17.837 rmmod nvme_keyring 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2533735 ']' 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2533735 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2533735 ']' 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2533735 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2533735 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2533735' 00:07:18.097 killing process with pid 2533735 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2533735 00:07:18.097 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2533735 00:07:18.356 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:18.356 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:18.356 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:18.356 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:18.356 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:18.356 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.356 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.356 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.264 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:20.264 00:07:20.264 real 0m17.110s 00:07:20.264 user 0m29.722s 00:07:20.264 sys 0m6.519s 00:07:20.264 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.264 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.264 ************************************ 00:07:20.264 END TEST nvmf_delete_subsystem 00:07:20.264 ************************************ 00:07:20.264 21:54:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:20.264 21:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:20.264 21:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.264 21:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.524 ************************************ 00:07:20.524 START TEST nvmf_host_management 00:07:20.524 ************************************ 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:20.524 * Looking for test storage... 
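Note on the loop that dominates the tail of the nvmf_delete_subsystem run above: the trace repeats target/delete_subsystem.sh@60 (( delay++ > 20 )), @57 kill -0 2534563 and @58 sleep 0.5 until kill -0 finally reports "No such process", after which @67 waits on the pid. A minimal bash sketch of that wait-for-exit pattern, reconstructed from the xtrace rather than copied from the script (the helper name and the exact loop structure are assumptions):

wait_for_exit() {
    # $1 is the target pid (2534563 in the run above)
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do   # process still alive?
        (( delay++ > 20 )) && return 1     # give up after ~10 s of 0.5 s polls
        sleep 0.5
    done
    wait "$pid" 2>/dev/null || true        # collect its status; an already-gone pid is the expected outcome
}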
00:07:20.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.524 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:20.525 21:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.129 
21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:27.129 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:27.129 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:27.129 Found net devices under 0000:af:00.0: cvl_0_0 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:27.129 Found net devices under 0000:af:00.1: cvl_0_1 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:27.129 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.130 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:27.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:27.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:07:27.130 00:07:27.130 --- 10.0.0.2 ping statistics --- 00:07:27.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.130 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:27.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:07:27.130 00:07:27.130 --- 10.0.0.1 ping statistics --- 00:07:27.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.130 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2538785 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2538785 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2538785 ']' 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.130 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.130 [2024-07-24 21:55:06.257061] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:07:27.130 [2024-07-24 21:55:06.257111] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.130 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.130 [2024-07-24 21:55:06.330955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.389 [2024-07-24 21:55:06.405985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.389 [2024-07-24 21:55:06.406024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.389 [2024-07-24 21:55:06.406034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.389 [2024-07-24 21:55:06.406043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.389 [2024-07-24 21:55:06.406050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.389 [2024-07-24 21:55:06.406151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.389 [2024-07-24 21:55:06.406238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.389 [2024-07-24 21:55:06.406346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.389 [2024-07-24 21:55:06.406347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.957 [2024-07-24 21:55:07.103863] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.957 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.957 Malloc0 00:07:28.216 [2024-07-24 21:55:07.170709] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2539088 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2539088 /var/tmp/bdevperf.sock 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2539088 ']' 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:28.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
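The rpcs.txt batch assembled by host_management.sh@22-@30 above is replayed through rpc_cmd but is not echoed into the log; only its effects are visible (the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.2 port 4420). A rough equivalent as individual scripts/rpc.py calls, using the standard SPDK RPC names plus values that do appear in this run (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SERIAL=SPDKISFASTANDAWESOME, and the cnode0/host0 NQNs used by the add_host/remove_host steps further down); the real batch may differ, so treat this as an illustration:

# target side, against the nvmf_tgt RPC socket (/var/tmp/spdk.sock by default)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # already issued at host_management.sh@18 in the run above
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0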
00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:28.216 { 00:07:28.216 "params": { 00:07:28.216 "name": "Nvme$subsystem", 00:07:28.216 "trtype": "$TEST_TRANSPORT", 00:07:28.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.216 "adrfam": "ipv4", 00:07:28.216 "trsvcid": "$NVMF_PORT", 00:07:28.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.216 "hdgst": ${hdgst:-false}, 00:07:28.216 "ddgst": ${ddgst:-false} 00:07:28.216 }, 00:07:28.216 "method": "bdev_nvme_attach_controller" 00:07:28.216 } 00:07:28.216 EOF 00:07:28.216 )") 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:28.216 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:28.216 "params": { 00:07:28.216 "name": "Nvme0", 00:07:28.216 "trtype": "tcp", 00:07:28.216 "traddr": "10.0.0.2", 00:07:28.216 "adrfam": "ipv4", 00:07:28.216 "trsvcid": "4420", 00:07:28.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:28.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:28.216 "hdgst": false, 00:07:28.216 "ddgst": false 00:07:28.216 }, 00:07:28.216 "method": "bdev_nvme_attach_controller" 00:07:28.216 }' 00:07:28.216 [2024-07-24 21:55:07.275630] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:07:28.216 [2024-07-24 21:55:07.275678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539088 ] 00:07:28.216 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.216 [2024-07-24 21:55:07.346464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.216 [2024-07-24 21:55:07.414435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.785 Running I/O for 10 seconds... 
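The bdevperf initiator above receives its bdev configuration through --json /dev/fd/63, a process substitution fed by gen_nvmf_target_json 0; the resolved bdev_nvme_attach_controller parameters are exactly the JSON fragment printed a few lines earlier. A sketch of running the same 10-second verify workload by hand, with the config written to a file instead (the file path is illustrative); the params block and the bdevperf flags are taken from the trace, while the outer "subsystems"/"config" wrapper is not shown above and is assumed here:

cat > /tmp/nvme0_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64 outstanding I/Os of 64 KiB each, verify workload, 10 seconds: the same flags as the traced run
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_bdev.json -q 64 -o 65536 -w verify -t 10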
00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:29.045 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.046 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.046 [2024-07-24 
21:55:08.146096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same 
with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146523] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.146594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06990 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.150210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:29.046 [2024-07-24 21:55:08.150250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.046 [2024-07-24 21:55:08.150263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:29.046 [2024-07-24 21:55:08.150273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.046 [2024-07-24 21:55:08.150282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:29.046 [2024-07-24 21:55:08.150292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.046 [2024-07-24 21:55:08.150303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:29.046 [2024-07-24 21:55:08.150314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.046 [2024-07-24 21:55:08.150324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5a70 is same with the state(5) to be set 00:07:29.046 [2024-07-24 21:55:08.150377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.046 [2024-07-24 21:55:08.150393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.046 [2024-07-24 21:55:08.150410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.046 [2024-07-24 
21:55:08.150422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.046 [2024-07-24 21:55:08.150434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150631] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.047 [2024-07-24 21:55:08.150843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.150988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.150998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.151009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.151018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.151029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.151039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.151050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.151060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.151071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.151080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.151091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.151102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.151113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.151122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.151133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.151143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.047 [2024-07-24 21:55:08.151154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.047 [2024-07-24 21:55:08.151164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:29.048 [2024-07-24 21:55:08.151216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:29.048 [2024-07-24 21:55:08.151464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.048 [2024-07-24 21:55:08.151516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 [2024-07-24 21:55:08.151710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.048 [2024-07-24 21:55:08.151725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.048 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.048 [2024-07-24 21:55:08.151799] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeb6a30 was disconnected and freed. reset controller. 00:07:29.048 [2024-07-24 21:55:08.152678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:29.048 task offset: 84096 on job bdev=Nvme0n1 fails 00:07:29.048 00:07:29.048 Latency(us) 00:07:29.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.048 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:29.048 Job: Nvme0n1 ended in about 0.46 seconds with error 00:07:29.048 Verification LBA range: start 0x0 length 0x400 00:07:29.048 Nvme0n1 : 0.46 1437.16 89.82 140.00 0.00 39693.39 1861.22 36280.73 00:07:29.048 =================================================================================================================== 00:07:29.048 Total : 1437.16 89.82 140.00 0.00 39693.39 1861.22 36280.73 00:07:29.048 [2024-07-24 21:55:08.154204] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.048 [2024-07-24 21:55:08.154222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5a70 (9): Bad file descriptor 00:07:29.048 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.048 21:55:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:29.048 [2024-07-24 21:55:08.166386] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2539088 00:07:29.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2539088) - No such process 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:29.985 { 00:07:29.985 "params": { 00:07:29.985 "name": "Nvme$subsystem", 00:07:29.985 "trtype": "$TEST_TRANSPORT", 00:07:29.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:29.985 "adrfam": "ipv4", 00:07:29.985 "trsvcid": "$NVMF_PORT", 00:07:29.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:29.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:29.985 "hdgst": ${hdgst:-false}, 00:07:29.985 "ddgst": ${ddgst:-false} 00:07:29.985 }, 00:07:29.985 "method": "bdev_nvme_attach_controller" 00:07:29.985 } 00:07:29.985 EOF 00:07:29.985 )") 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:29.985 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:29.985 "params": { 00:07:29.985 "name": "Nvme0", 00:07:29.985 "trtype": "tcp", 00:07:29.985 "traddr": "10.0.0.2", 00:07:29.985 "adrfam": "ipv4", 00:07:29.985 "trsvcid": "4420", 00:07:29.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:29.985 "hdgst": false, 00:07:29.985 "ddgst": false 00:07:29.985 }, 00:07:29.985 "method": "bdev_nvme_attach_controller" 00:07:29.985 }' 00:07:30.244 [2024-07-24 21:55:09.217942] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
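For reference, the bdevperf run traced above does not read a config file from disk: gen_nvmf_target_json prints a bdev JSON config on the fly and the script hands it to bdevperf through /dev/fd/62. Below is a hedged, hand-runnable sketch of an equivalent invocation; the attach-controller parameters are copied from the JSON printed in the log, while the outer "subsystems"/"config" wrapper and the explicit fd-62 here-document are assumptions based on SPDK's usual JSON-config layout, not the test script's exact mechanism.

# Sketch only (assumed wrapper layout; parameters taken from the trace above).
# Run from the SPDK repo root.
./build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF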
00:07:30.244 [2024-07-24 21:55:09.217994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539368 ] 00:07:30.244 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.244 [2024-07-24 21:55:09.287459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.244 [2024-07-24 21:55:09.353124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.502 Running I/O for 1 seconds... 00:07:31.439 00:07:31.439 Latency(us) 00:07:31.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.439 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:31.439 Verification LBA range: start 0x0 length 0x400 00:07:31.439 Nvme0n1 : 1.00 1531.61 95.73 0.00 0.00 41232.08 9384.76 38587.60 00:07:31.439 =================================================================================================================== 00:07:31.439 Total : 1531.61 95.73 0.00 0.00 41232.08 9384.76 38587.60 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.698 rmmod nvme_tcp 00:07:31.698 rmmod nvme_fabrics 00:07:31.698 rmmod nvme_keyring 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2538785 ']' 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2538785 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2538785 ']' 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2538785 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@955 -- # uname 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2538785 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2538785' 00:07:31.698 killing process with pid 2538785 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2538785 00:07:31.698 21:55:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2538785 00:07:31.957 [2024-07-24 21:55:11.054315] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:31.957 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.957 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.957 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.957 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.957 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.957 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.957 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.957 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:34.494 00:07:34.494 real 0m13.660s 00:07:34.494 user 0m22.707s 00:07:34.494 sys 0m6.340s 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.494 ************************************ 00:07:34.494 END TEST nvmf_host_management 00:07:34.494 ************************************ 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.494 ************************************ 00:07:34.494 START TEST nvmf_lvol 00:07:34.494 ************************************ 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:34.494 * Looking for test storage... 00:07:34.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.494 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.495 21:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.066 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:41.067 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:41.067 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:41.067 Found net devices under 0000:af:00.0: cvl_0_0 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:41.067 Found net devices under 0000:af:00.1: cvl_0_1 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.067 21:55:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.067 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.067 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.067 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.067 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.067 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.067 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.067 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.067 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.067 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.067 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:07:41.326 00:07:41.326 --- 10.0.0.2 ping statistics --- 00:07:41.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.326 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:07:41.326 00:07:41.326 --- 10.0.0.1 ping statistics --- 00:07:41.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.326 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2543329 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2543329 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2543329 ']' 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.326 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.326 [2024-07-24 21:55:20.401568] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:07:41.326 [2024-07-24 21:55:20.401616] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.326 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.326 [2024-07-24 21:55:20.477520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.585 [2024-07-24 21:55:20.552061] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.585 [2024-07-24 21:55:20.552100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.585 [2024-07-24 21:55:20.552113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.585 [2024-07-24 21:55:20.552122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.585 [2024-07-24 21:55:20.552129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.585 [2024-07-24 21:55:20.552190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.585 [2024-07-24 21:55:20.552285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.585 [2024-07-24 21:55:20.552287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.189 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.189 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:42.189 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.189 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.189 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.189 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.189 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:42.189 [2024-07-24 21:55:21.396983] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.448 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:42.448 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:42.448 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:42.706 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:42.706 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:42.965 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:42.965 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=23ac1885-6068-43f6-9d12-9609978ef4dd 
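To make the setup above easier to follow, here is the same RPC sequence condensed into plain calls (rpc.py path shortened; the UUID shown is the one captured in this run). This is only a restatement of commands already visible in the trace, with the 20 MiB lvol creation that follows immediately below included for context.

# Condensed restatement of the lvol-test setup traced above (not new steps)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                  # -> 23ac1885-6068-43f6-9d12-9609978ef4dd here
rpc.py bdev_lvol_create -u "$lvs" lvol 20                         # 20 MiB lvol, created next in the trace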
00:07:43.224 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 23ac1885-6068-43f6-9d12-9609978ef4dd lvol 20 00:07:43.224 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2388ac29-5563-437c-8ffd-ee193d2a32f4 00:07:43.224 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:43.483 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2388ac29-5563-437c-8ffd-ee193d2a32f4 00:07:43.742 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:43.742 [2024-07-24 21:55:22.874105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.742 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.001 21:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2543889 00:07:44.002 21:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:44.002 21:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:44.002 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.939 21:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2388ac29-5563-437c-8ffd-ee193d2a32f4 MY_SNAPSHOT 00:07:45.199 21:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8a52ca32-ca35-45b2-a4c4-79d95fd26b6a 00:07:45.199 21:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2388ac29-5563-437c-8ffd-ee193d2a32f4 30 00:07:45.458 21:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8a52ca32-ca35-45b2-a4c4-79d95fd26b6a MY_CLONE 00:07:45.716 21:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=72abeae5-7d5f-40ec-a67e-961cd8f70217 00:07:45.716 21:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 72abeae5-7d5f-40ec-a67e-961cd8f70217 00:07:45.976 21:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2543889 00:07:55.957 Initializing NVMe Controllers 00:07:55.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:55.957 Controller IO queue size 128, less than required. 00:07:55.957 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:55.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:55.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:55.957 Initialization complete. Launching workers. 00:07:55.957 ======================================================== 00:07:55.957 Latency(us) 00:07:55.957 Device Information : IOPS MiB/s Average min max 00:07:55.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12870.60 50.28 9949.90 1977.64 51925.28 00:07:55.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12751.50 49.81 10038.55 3759.94 51224.80 00:07:55.957 ======================================================== 00:07:55.957 Total : 25622.10 100.09 9994.02 1977.64 51925.28 00:07:55.957 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2388ac29-5563-437c-8ffd-ee193d2a32f4 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23ac1885-6068-43f6-9d12-9609978ef4dd 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.957 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.957 rmmod nvme_tcp 00:07:55.957 rmmod nvme_fabrics 00:07:55.957 rmmod nvme_keyring 00:07:55.957 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.957 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:55.957 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:55.957 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2543329 ']' 00:07:55.957 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2543329 00:07:55.957 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2543329 ']' 00:07:55.957 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2543329 00:07:55.957 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:55.957 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2543329 00:07:55.958 21:55:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2543329' 00:07:55.958 killing process with pid 2543329 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2543329 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2543329 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.958 21:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.335 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:57.335 00:07:57.335 real 0m23.164s 00:07:57.335 user 1m2.333s 00:07:57.335 sys 0m9.972s 00:07:57.335 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.335 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:57.335 ************************************ 00:07:57.335 END TEST nvmf_lvol 00:07:57.335 ************************************ 00:07:57.335 21:55:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:57.335 21:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:57.335 21:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.335 21:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.335 ************************************ 00:07:57.335 START TEST nvmf_lvs_grow 00:07:57.335 ************************************ 00:07:57.335 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:57.595 * Looking for test storage... 
00:07:57.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.595 21:55:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:57.595 21:55:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.595 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.168 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:04.169 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:04.169 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:04.169 
21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:04.169 Found net devices under 0000:af:00.0: cvl_0_0 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:04.169 Found net devices under 0000:af:00.1: cvl_0_1 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.169 21:55:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:04.169 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:04.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:08:04.427 00:08:04.427 --- 10.0.0.2 ping statistics --- 00:08:04.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.427 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:08:04.427 00:08:04.427 --- 10.0.0.1 ping statistics --- 00:08:04.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.427 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.427 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2549434 00:08:04.428 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2549434 00:08:04.428 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:04.428 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2549434 ']' 00:08:04.428 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.428 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.428 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.428 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.428 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.428 [2024-07-24 21:55:43.522708] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
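The ip and iptables commands in the entries above are how nvmf/common.sh turns the two E810 ports (0000:af:00.0 and 0000:af:00.1, exposed as cvl_0_0 and cvl_0_1) into a target/initiator pair on a single host: cvl_0_0 is moved into its own network namespace and gets the target address, cvl_0_1 stays in the root namespace as the initiator, one ping in each direction proves the 10.0.0.0/24 link before any NVMe/TCP traffic is attempted, and nvmf_tgt is then launched inside that namespace via ip netns exec. A condensed sketch of the setup, with device and namespace names exactly as in the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept inbound TCP to port 4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                                 # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> initiator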
00:08:04.428 [2024-07-24 21:55:43.522768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.428 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.428 [2024-07-24 21:55:43.597780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.686 [2024-07-24 21:55:43.675442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.686 [2024-07-24 21:55:43.675478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.686 [2024-07-24 21:55:43.675487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.686 [2024-07-24 21:55:43.675496] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.686 [2024-07-24 21:55:43.675519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.686 [2024-07-24 21:55:43.675539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.255 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.255 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:05.255 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:05.255 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.255 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:05.255 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.255 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:05.513 [2024-07-24 21:55:44.522827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:05.514 ************************************ 00:08:05.514 START TEST lvs_grow_clean 00:08:05.514 ************************************ 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:05.514 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.772 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:05.772 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:05.772 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=50e77b46-fd58-4240-98f7-bc13e572670f 00:08:05.772 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:05.772 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:06.030 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:06.030 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:06.030 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 50e77b46-fd58-4240-98f7-bc13e572670f lvol 150 00:08:06.289 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5f6c2665-5d8c-44b5-ab99-62fcdf781c07 00:08:06.289 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.289 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:06.289 [2024-07-24 21:55:45.445364] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:06.290 [2024-07-24 21:55:45.445410] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:06.290 true 00:08:06.290 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:06.290 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:06.548 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:06.548 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.807 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f6c2665-5d8c-44b5-ab99-62fcdf781c07 00:08:06.807 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:07.067 [2024-07-24 21:55:46.103332] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.067 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2549999 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2549999 /var/tmp/bdevperf.sock 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2549999 ']' 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:07.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.327 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.327 [2024-07-24 21:55:46.331517] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
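The lvs_grow_clean case that starts here exercises online growth of a logical-volume store: the lvstore sits on a file-backed AIO bdev, a 150 MB lvol from it is exported through cnode0, and bdevperf (core mask 0x2, random writes) is attached over TCP before the store is enlarged. Condensed against this run's values, the flow the following entries walk through is roughly this; $rpc, $aio and $lvs are shorthands for the paths and UUIDs in the log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"                              # 200 MiB backing file
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 clusters of 4 MiB
  $rpc bdev_lvol_create -u "$lvs" lvol 150             # exported via cnode0 and driven by bdevperf
  truncate -s 400M "$aio"                              # grow the file under the running target
  $rpc bdev_aio_rescan aio_bdev                        # AIO bdev picks up the new size (51200 -> 102400 blocks)
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                # lvstore then reports 99 data clusters

The checks in the log (total_data_clusters still 49 after the rescan, 99 only after bdev_lvol_grow_lvstore) are the point of the test: resizing the base bdev alone does not grow the store.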
00:08:07.327 [2024-07-24 21:55:46.331567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549999 ] 00:08:07.327 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.327 [2024-07-24 21:55:46.400347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.327 [2024-07-24 21:55:46.468402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.265 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.265 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:08.265 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:08.265 Nvme0n1 00:08:08.265 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:08.524 [ 00:08:08.524 { 00:08:08.524 "name": "Nvme0n1", 00:08:08.524 "aliases": [ 00:08:08.524 "5f6c2665-5d8c-44b5-ab99-62fcdf781c07" 00:08:08.524 ], 00:08:08.524 "product_name": "NVMe disk", 00:08:08.524 "block_size": 4096, 00:08:08.524 "num_blocks": 38912, 00:08:08.524 "uuid": "5f6c2665-5d8c-44b5-ab99-62fcdf781c07", 00:08:08.524 "assigned_rate_limits": { 00:08:08.524 "rw_ios_per_sec": 0, 00:08:08.524 "rw_mbytes_per_sec": 0, 00:08:08.524 "r_mbytes_per_sec": 0, 00:08:08.524 "w_mbytes_per_sec": 0 00:08:08.524 }, 00:08:08.524 "claimed": false, 00:08:08.524 "zoned": false, 00:08:08.524 "supported_io_types": { 00:08:08.524 "read": true, 00:08:08.524 "write": true, 00:08:08.524 "unmap": true, 00:08:08.524 "flush": true, 00:08:08.524 "reset": true, 00:08:08.524 "nvme_admin": true, 00:08:08.524 "nvme_io": true, 00:08:08.524 "nvme_io_md": false, 00:08:08.524 "write_zeroes": true, 00:08:08.524 "zcopy": false, 00:08:08.524 "get_zone_info": false, 00:08:08.524 "zone_management": false, 00:08:08.524 "zone_append": false, 00:08:08.524 "compare": true, 00:08:08.524 "compare_and_write": true, 00:08:08.524 "abort": true, 00:08:08.524 "seek_hole": false, 00:08:08.524 "seek_data": false, 00:08:08.524 "copy": true, 00:08:08.525 "nvme_iov_md": false 00:08:08.525 }, 00:08:08.525 "memory_domains": [ 00:08:08.525 { 00:08:08.525 "dma_device_id": "system", 00:08:08.525 "dma_device_type": 1 00:08:08.525 } 00:08:08.525 ], 00:08:08.525 "driver_specific": { 00:08:08.525 "nvme": [ 00:08:08.525 { 00:08:08.525 "trid": { 00:08:08.525 "trtype": "TCP", 00:08:08.525 "adrfam": "IPv4", 00:08:08.525 "traddr": "10.0.0.2", 00:08:08.525 "trsvcid": "4420", 00:08:08.525 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:08.525 }, 00:08:08.525 "ctrlr_data": { 00:08:08.525 "cntlid": 1, 00:08:08.525 "vendor_id": "0x8086", 00:08:08.525 "model_number": "SPDK bdev Controller", 00:08:08.525 "serial_number": "SPDK0", 00:08:08.525 "firmware_revision": "24.09", 00:08:08.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:08.525 "oacs": { 00:08:08.525 "security": 0, 00:08:08.525 "format": 0, 00:08:08.525 "firmware": 0, 00:08:08.525 "ns_manage": 0 00:08:08.525 }, 00:08:08.525 
"multi_ctrlr": true, 00:08:08.525 "ana_reporting": false 00:08:08.525 }, 00:08:08.525 "vs": { 00:08:08.525 "nvme_version": "1.3" 00:08:08.525 }, 00:08:08.525 "ns_data": { 00:08:08.525 "id": 1, 00:08:08.525 "can_share": true 00:08:08.525 } 00:08:08.525 } 00:08:08.525 ], 00:08:08.525 "mp_policy": "active_passive" 00:08:08.525 } 00:08:08.525 } 00:08:08.525 ] 00:08:08.525 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2550269 00:08:08.525 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:08.525 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:08.525 Running I/O for 10 seconds... 00:08:09.462 Latency(us) 00:08:09.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.462 Nvme0n1 : 1.00 22857.00 89.29 0.00 0.00 0.00 0.00 0.00 00:08:09.462 =================================================================================================================== 00:08:09.462 Total : 22857.00 89.29 0.00 0.00 0.00 0.00 0.00 00:08:09.462 00:08:10.399 21:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:10.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.658 Nvme0n1 : 2.00 23088.50 90.19 0.00 0.00 0.00 0.00 0.00 00:08:10.658 =================================================================================================================== 00:08:10.658 Total : 23088.50 90.19 0.00 0.00 0.00 0.00 0.00 00:08:10.658 00:08:10.658 true 00:08:10.658 21:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:10.658 21:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:10.917 21:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:10.917 21:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:10.917 21:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2550269 00:08:11.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.485 Nvme0n1 : 3.00 23104.33 90.25 0.00 0.00 0.00 0.00 0.00 00:08:11.485 =================================================================================================================== 00:08:11.485 Total : 23104.33 90.25 0.00 0.00 0.00 0.00 0.00 00:08:11.485 00:08:12.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.865 Nvme0n1 : 4.00 23188.25 90.58 0.00 0.00 0.00 0.00 0.00 00:08:12.865 =================================================================================================================== 00:08:12.865 Total : 23188.25 90.58 0.00 0.00 0.00 0.00 0.00 00:08:12.865 00:08:13.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:13.803 Nvme0n1 : 5.00 23235.40 90.76 0.00 0.00 0.00 0.00 0.00 00:08:13.803 =================================================================================================================== 00:08:13.803 Total : 23235.40 90.76 0.00 0.00 0.00 0.00 0.00 00:08:13.803 00:08:14.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.739 Nvme0n1 : 6.00 23281.50 90.94 0.00 0.00 0.00 0.00 0.00 00:08:14.739 =================================================================================================================== 00:08:14.739 Total : 23281.50 90.94 0.00 0.00 0.00 0.00 0.00 00:08:14.739 00:08:15.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.677 Nvme0n1 : 7.00 23322.43 91.10 0.00 0.00 0.00 0.00 0.00 00:08:15.677 =================================================================================================================== 00:08:15.678 Total : 23322.43 91.10 0.00 0.00 0.00 0.00 0.00 00:08:15.678 00:08:16.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.615 Nvme0n1 : 8.00 23354.12 91.23 0.00 0.00 0.00 0.00 0.00 00:08:16.615 =================================================================================================================== 00:08:16.615 Total : 23354.12 91.23 0.00 0.00 0.00 0.00 0.00 00:08:16.615 00:08:17.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.553 Nvme0n1 : 9.00 23378.78 91.32 0.00 0.00 0.00 0.00 0.00 00:08:17.553 =================================================================================================================== 00:08:17.553 Total : 23378.78 91.32 0.00 0.00 0.00 0.00 0.00 00:08:17.553 00:08:18.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.491 Nvme0n1 : 10.00 23398.50 91.40 0.00 0.00 0.00 0.00 0.00 00:08:18.491 =================================================================================================================== 00:08:18.491 Total : 23398.50 91.40 0.00 0.00 0.00 0.00 0.00 00:08:18.491 00:08:18.491 00:08:18.491 Latency(us) 00:08:18.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.491 Nvme0n1 : 10.01 23398.13 91.40 0.00 0.00 5466.62 4141.88 16672.36 00:08:18.491 =================================================================================================================== 00:08:18.491 Total : 23398.13 91.40 0.00 0.00 5466.62 4141.88 16672.36 00:08:18.491 0 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2549999 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2549999 ']' 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2549999 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2549999 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:18.754 
21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2549999' 00:08:18.754 killing process with pid 2549999 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2549999 00:08:18.754 Received shutdown signal, test time was about 10.000000 seconds 00:08:18.754 00:08:18.754 Latency(us) 00:08:18.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.754 =================================================================================================================== 00:08:18.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2549999 00:08:18.754 21:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.013 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:19.273 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:19.273 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:19.273 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:19.273 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:19.273 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:19.532 [2024-07-24 21:55:58.635242] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:19.532 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:19.792 request: 00:08:19.792 { 00:08:19.792 "uuid": "50e77b46-fd58-4240-98f7-bc13e572670f", 00:08:19.792 "method": "bdev_lvol_get_lvstores", 00:08:19.792 "req_id": 1 00:08:19.792 } 00:08:19.792 Got JSON-RPC error response 00:08:19.792 response: 00:08:19.792 { 00:08:19.792 "code": -19, 00:08:19.792 "message": "No such device" 00:08:19.792 } 00:08:19.792 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:19.792 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.792 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.792 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.792 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.792 aio_bdev 00:08:20.051 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5f6c2665-5d8c-44b5-ab99-62fcdf781c07 00:08:20.051 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=5f6c2665-5d8c-44b5-ab99-62fcdf781c07 00:08:20.051 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:20.051 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:20.051 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:20.051 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:20.051 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:20.051 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 5f6c2665-5d8c-44b5-ab99-62fcdf781c07 -t 2000 00:08:20.310 [ 00:08:20.310 { 00:08:20.310 "name": "5f6c2665-5d8c-44b5-ab99-62fcdf781c07", 00:08:20.310 "aliases": [ 00:08:20.310 "lvs/lvol" 00:08:20.310 ], 00:08:20.310 "product_name": "Logical Volume", 00:08:20.310 "block_size": 4096, 00:08:20.310 "num_blocks": 38912, 00:08:20.310 "uuid": "5f6c2665-5d8c-44b5-ab99-62fcdf781c07", 00:08:20.310 "assigned_rate_limits": { 00:08:20.310 "rw_ios_per_sec": 0, 00:08:20.310 "rw_mbytes_per_sec": 0, 00:08:20.310 "r_mbytes_per_sec": 0, 00:08:20.310 "w_mbytes_per_sec": 0 00:08:20.310 }, 00:08:20.310 "claimed": false, 00:08:20.310 "zoned": false, 00:08:20.310 "supported_io_types": { 00:08:20.310 "read": true, 00:08:20.310 "write": true, 00:08:20.310 "unmap": true, 00:08:20.310 "flush": false, 00:08:20.310 "reset": true, 00:08:20.310 "nvme_admin": false, 00:08:20.310 "nvme_io": false, 00:08:20.310 "nvme_io_md": false, 00:08:20.310 "write_zeroes": true, 00:08:20.310 "zcopy": false, 00:08:20.310 "get_zone_info": false, 00:08:20.310 "zone_management": false, 00:08:20.310 "zone_append": false, 00:08:20.310 "compare": false, 00:08:20.310 "compare_and_write": false, 00:08:20.310 "abort": false, 00:08:20.310 "seek_hole": true, 00:08:20.310 "seek_data": true, 00:08:20.310 "copy": false, 00:08:20.310 "nvme_iov_md": false 00:08:20.310 }, 00:08:20.310 "driver_specific": { 00:08:20.310 "lvol": { 00:08:20.310 "lvol_store_uuid": "50e77b46-fd58-4240-98f7-bc13e572670f", 00:08:20.310 "base_bdev": "aio_bdev", 00:08:20.310 "thin_provision": false, 00:08:20.310 "num_allocated_clusters": 38, 00:08:20.310 "snapshot": false, 00:08:20.310 "clone": false, 00:08:20.310 "esnap_clone": false 00:08:20.310 } 00:08:20.310 } 00:08:20.310 } 00:08:20.310 ] 00:08:20.310 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:20.310 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:20.310 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:20.310 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:20.570 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:20.570 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:20.570 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:20.570 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f6c2665-5d8c-44b5-ab99-62fcdf781c07 00:08:20.829 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50e77b46-fd58-4240-98f7-bc13e572670f 00:08:20.829 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:21.088 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.088 00:08:21.088 real 0m15.644s 00:08:21.088 user 0m14.724s 00:08:21.088 sys 0m2.048s 00:08:21.088 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.088 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:21.088 ************************************ 00:08:21.088 END TEST lvs_grow_clean 00:08:21.088 ************************************ 00:08:21.088 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:21.088 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:21.088 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.088 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.347 ************************************ 00:08:21.347 START TEST lvs_grow_dirty 00:08:21.347 ************************************ 00:08:21.347 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:21.347 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:21.347 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:21.348 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:21.348 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:21.348 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:21.348 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:21.348 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.348 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.348 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.348 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:21.348 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:21.607 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:21.607 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:21.607 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:21.866 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:21.866 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:21.866 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d lvol 150 00:08:21.866 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=55431f19-8eea-4de1-a6a6-370cb1e6df4f 00:08:21.866 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.866 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:22.126 [2024-07-24 21:56:01.168412] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:22.126 [2024-07-24 21:56:01.168462] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:22.126 true 00:08:22.126 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:22.126 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:22.388 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:22.388 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.388 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55431f19-8eea-4de1-a6a6-370cb1e6df4f 00:08:22.648 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:22.648 [2024-07-24 21:56:01.838393] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.648 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2552728 00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2552728 /var/tmp/bdevperf.sock 00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2552728 ']' 00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:22.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.907 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.907 [2024-07-24 21:56:02.054987] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:08:22.907 [2024-07-24 21:56:02.055042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552728 ] 00:08:22.907 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.167 [2024-07-24 21:56:02.124995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.167 [2024-07-24 21:56:02.200110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.736 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.736 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:23.736 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:24.305 Nvme0n1 00:08:24.305 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:24.305 [ 00:08:24.305 { 00:08:24.305 "name": "Nvme0n1", 00:08:24.305 "aliases": [ 00:08:24.305 "55431f19-8eea-4de1-a6a6-370cb1e6df4f" 00:08:24.305 ], 00:08:24.305 "product_name": "NVMe disk", 00:08:24.305 "block_size": 4096, 00:08:24.305 "num_blocks": 38912, 00:08:24.305 "uuid": "55431f19-8eea-4de1-a6a6-370cb1e6df4f", 00:08:24.305 "assigned_rate_limits": { 00:08:24.305 "rw_ios_per_sec": 0, 00:08:24.305 "rw_mbytes_per_sec": 0, 00:08:24.305 "r_mbytes_per_sec": 0, 00:08:24.305 "w_mbytes_per_sec": 0 00:08:24.305 }, 00:08:24.305 "claimed": false, 00:08:24.305 "zoned": false, 00:08:24.305 "supported_io_types": { 00:08:24.305 "read": true, 00:08:24.305 "write": true, 00:08:24.305 "unmap": true, 00:08:24.305 "flush": true, 00:08:24.305 "reset": true, 00:08:24.305 "nvme_admin": true, 00:08:24.305 "nvme_io": true, 00:08:24.305 "nvme_io_md": false, 00:08:24.305 "write_zeroes": true, 00:08:24.305 "zcopy": false, 00:08:24.305 "get_zone_info": false, 00:08:24.305 "zone_management": false, 00:08:24.305 "zone_append": false, 00:08:24.305 "compare": true, 00:08:24.305 "compare_and_write": true, 00:08:24.305 "abort": true, 00:08:24.305 "seek_hole": false, 00:08:24.305 "seek_data": false, 00:08:24.305 "copy": true, 00:08:24.305 "nvme_iov_md": false 00:08:24.305 }, 00:08:24.305 "memory_domains": [ 00:08:24.305 { 00:08:24.305 "dma_device_id": "system", 00:08:24.305 "dma_device_type": 1 00:08:24.305 } 00:08:24.305 ], 00:08:24.305 "driver_specific": { 00:08:24.305 "nvme": [ 00:08:24.305 { 00:08:24.305 "trid": { 00:08:24.305 "trtype": "TCP", 00:08:24.305 "adrfam": "IPv4", 00:08:24.305 "traddr": "10.0.0.2", 00:08:24.305 "trsvcid": "4420", 00:08:24.305 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:24.305 }, 00:08:24.305 "ctrlr_data": { 00:08:24.305 "cntlid": 1, 00:08:24.305 "vendor_id": "0x8086", 00:08:24.305 "model_number": "SPDK bdev Controller", 00:08:24.305 "serial_number": "SPDK0", 00:08:24.305 "firmware_revision": "24.09", 00:08:24.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.305 "oacs": { 00:08:24.305 "security": 0, 00:08:24.305 "format": 0, 00:08:24.305 "firmware": 0, 00:08:24.305 "ns_manage": 0 00:08:24.305 }, 00:08:24.305 
"multi_ctrlr": true, 00:08:24.305 "ana_reporting": false 00:08:24.305 }, 00:08:24.305 "vs": { 00:08:24.305 "nvme_version": "1.3" 00:08:24.305 }, 00:08:24.305 "ns_data": { 00:08:24.305 "id": 1, 00:08:24.305 "can_share": true 00:08:24.305 } 00:08:24.305 } 00:08:24.305 ], 00:08:24.305 "mp_policy": "active_passive" 00:08:24.305 } 00:08:24.305 } 00:08:24.305 ] 00:08:24.305 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2552992 00:08:24.305 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:24.305 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:24.305 Running I/O for 10 seconds... 00:08:25.682 Latency(us) 00:08:25.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.682 Nvme0n1 : 1.00 24070.00 94.02 0.00 0.00 0.00 0.00 0.00 00:08:25.682 =================================================================================================================== 00:08:25.682 Total : 24070.00 94.02 0.00 0.00 0.00 0.00 0.00 00:08:25.682 00:08:26.249 21:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:26.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.509 Nvme0n1 : 2.00 24184.00 94.47 0.00 0.00 0.00 0.00 0.00 00:08:26.509 =================================================================================================================== 00:08:26.509 Total : 24184.00 94.47 0.00 0.00 0.00 0.00 0.00 00:08:26.509 00:08:26.509 true 00:08:26.509 21:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:26.509 21:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:26.768 21:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:26.768 21:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:26.768 21:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2552992 00:08:27.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.333 Nvme0n1 : 3.00 24192.33 94.50 0.00 0.00 0.00 0.00 0.00 00:08:27.333 =================================================================================================================== 00:08:27.333 Total : 24192.33 94.50 0.00 0.00 0.00 0.00 0.00 00:08:27.333 00:08:28.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.708 Nvme0n1 : 4.00 24267.75 94.80 0.00 0.00 0.00 0.00 0.00 00:08:28.708 =================================================================================================================== 00:08:28.708 Total : 24267.75 94.80 0.00 0.00 0.00 0.00 0.00 00:08:28.708 00:08:29.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:29.651 Nvme0n1 : 5.00 24308.40 94.95 0.00 0.00 0.00 0.00 0.00 00:08:29.651 =================================================================================================================== 00:08:29.651 Total : 24308.40 94.95 0.00 0.00 0.00 0.00 0.00 00:08:29.651 00:08:30.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.588 Nvme0n1 : 6.00 24338.50 95.07 0.00 0.00 0.00 0.00 0.00 00:08:30.588 =================================================================================================================== 00:08:30.588 Total : 24338.50 95.07 0.00 0.00 0.00 0.00 0.00 00:08:30.588 00:08:31.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.525 Nvme0n1 : 7.00 24317.57 94.99 0.00 0.00 0.00 0.00 0.00 00:08:31.525 =================================================================================================================== 00:08:31.525 Total : 24317.57 94.99 0.00 0.00 0.00 0.00 0.00 00:08:31.525 00:08:32.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.461 Nvme0n1 : 8.00 24341.88 95.09 0.00 0.00 0.00 0.00 0.00 00:08:32.461 =================================================================================================================== 00:08:32.461 Total : 24341.88 95.09 0.00 0.00 0.00 0.00 0.00 00:08:32.461 00:08:33.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.397 Nvme0n1 : 9.00 24367.89 95.19 0.00 0.00 0.00 0.00 0.00 00:08:33.397 =================================================================================================================== 00:08:33.397 Total : 24367.89 95.19 0.00 0.00 0.00 0.00 0.00 00:08:33.397 00:08:34.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.335 Nvme0n1 : 10.00 24388.70 95.27 0.00 0.00 0.00 0.00 0.00 00:08:34.335 =================================================================================================================== 00:08:34.335 Total : 24388.70 95.27 0.00 0.00 0.00 0.00 0.00 00:08:34.335 00:08:34.335 00:08:34.335 Latency(us) 00:08:34.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.335 Nvme0n1 : 10.00 24387.14 95.26 0.00 0.00 5245.20 2490.37 9909.04 00:08:34.335 =================================================================================================================== 00:08:34.335 Total : 24387.14 95.26 0.00 0.00 5245.20 2490.37 9909.04 00:08:34.335 0 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2552728 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2552728 ']' 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2552728 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2552728 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:34.594 
21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2552728' 00:08:34.594 killing process with pid 2552728 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2552728 00:08:34.594 Received shutdown signal, test time was about 10.000000 seconds 00:08:34.594 00:08:34.594 Latency(us) 00:08:34.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.594 =================================================================================================================== 00:08:34.594 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2552728 00:08:34.594 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.853 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.113 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:35.113 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:35.113 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:35.113 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:35.113 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2549434 00:08:35.113 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2549434 00:08:35.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2549434 Killed "${NVMF_APP[@]}" "$@" 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2554851 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2554851 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2554851 ']' 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.372 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.372 [2024-07-24 21:56:14.385935] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:08:35.372 [2024-07-24 21:56:14.385986] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.372 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.372 [2024-07-24 21:56:14.460826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.372 [2024-07-24 21:56:14.532923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.372 [2024-07-24 21:56:14.532957] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.372 [2024-07-24 21:56:14.532966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.372 [2024-07-24 21:56:14.532974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.372 [2024-07-24 21:56:14.532998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:35.372 [2024-07-24 21:56:14.533018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:36.311 [2024-07-24 21:56:15.381140] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:36.311 [2024-07-24 21:56:15.381235] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:36.311 [2024-07-24 21:56:15.381262] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 55431f19-8eea-4de1-a6a6-370cb1e6df4f 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=55431f19-8eea-4de1-a6a6-370cb1e6df4f 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.311 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:36.570 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 55431f19-8eea-4de1-a6a6-370cb1e6df4f -t 2000 00:08:36.570 [ 00:08:36.570 { 00:08:36.570 "name": "55431f19-8eea-4de1-a6a6-370cb1e6df4f", 00:08:36.570 "aliases": [ 00:08:36.570 "lvs/lvol" 00:08:36.570 ], 00:08:36.570 "product_name": "Logical Volume", 00:08:36.570 "block_size": 4096, 00:08:36.570 "num_blocks": 38912, 00:08:36.570 "uuid": "55431f19-8eea-4de1-a6a6-370cb1e6df4f", 00:08:36.570 "assigned_rate_limits": { 00:08:36.570 "rw_ios_per_sec": 0, 00:08:36.570 "rw_mbytes_per_sec": 0, 00:08:36.570 "r_mbytes_per_sec": 0, 00:08:36.571 "w_mbytes_per_sec": 0 00:08:36.571 }, 00:08:36.571 "claimed": false, 00:08:36.571 "zoned": false, 
00:08:36.571 "supported_io_types": { 00:08:36.571 "read": true, 00:08:36.571 "write": true, 00:08:36.571 "unmap": true, 00:08:36.571 "flush": false, 00:08:36.571 "reset": true, 00:08:36.571 "nvme_admin": false, 00:08:36.571 "nvme_io": false, 00:08:36.571 "nvme_io_md": false, 00:08:36.571 "write_zeroes": true, 00:08:36.571 "zcopy": false, 00:08:36.571 "get_zone_info": false, 00:08:36.571 "zone_management": false, 00:08:36.571 "zone_append": false, 00:08:36.571 "compare": false, 00:08:36.571 "compare_and_write": false, 00:08:36.571 "abort": false, 00:08:36.571 "seek_hole": true, 00:08:36.571 "seek_data": true, 00:08:36.571 "copy": false, 00:08:36.571 "nvme_iov_md": false 00:08:36.571 }, 00:08:36.571 "driver_specific": { 00:08:36.571 "lvol": { 00:08:36.571 "lvol_store_uuid": "afa2ec6f-94d8-46f5-8ea0-19c202ba724d", 00:08:36.571 "base_bdev": "aio_bdev", 00:08:36.571 "thin_provision": false, 00:08:36.571 "num_allocated_clusters": 38, 00:08:36.571 "snapshot": false, 00:08:36.571 "clone": false, 00:08:36.571 "esnap_clone": false 00:08:36.571 } 00:08:36.571 } 00:08:36.571 } 00:08:36.571 ] 00:08:36.571 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:36.571 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:36.571 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:36.831 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:36.831 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:36.831 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:37.090 [2024-07-24 21:56:16.241486] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.090 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.091 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.091 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:37.091 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:37.350 request: 00:08:37.350 { 00:08:37.350 "uuid": "afa2ec6f-94d8-46f5-8ea0-19c202ba724d", 00:08:37.350 "method": "bdev_lvol_get_lvstores", 00:08:37.350 "req_id": 1 00:08:37.350 } 00:08:37.350 Got JSON-RPC error response 00:08:37.350 response: 00:08:37.350 { 00:08:37.350 "code": -19, 00:08:37.350 "message": "No such device" 00:08:37.350 } 00:08:37.350 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:37.350 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:37.350 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:37.350 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:37.350 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.609 aio_bdev 00:08:37.609 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 55431f19-8eea-4de1-a6a6-370cb1e6df4f 00:08:37.609 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=55431f19-8eea-4de1-a6a6-370cb1e6df4f 00:08:37.609 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.609 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:37.609 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.609 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.609 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:37.609 21:56:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 55431f19-8eea-4de1-a6a6-370cb1e6df4f -t 2000 00:08:37.868 [ 00:08:37.868 { 00:08:37.868 "name": "55431f19-8eea-4de1-a6a6-370cb1e6df4f", 00:08:37.868 "aliases": [ 00:08:37.868 "lvs/lvol" 00:08:37.868 ], 00:08:37.868 "product_name": "Logical Volume", 00:08:37.868 "block_size": 4096, 00:08:37.868 "num_blocks": 38912, 00:08:37.868 "uuid": "55431f19-8eea-4de1-a6a6-370cb1e6df4f", 00:08:37.868 "assigned_rate_limits": { 00:08:37.868 "rw_ios_per_sec": 0, 00:08:37.868 "rw_mbytes_per_sec": 0, 00:08:37.868 "r_mbytes_per_sec": 0, 00:08:37.868 "w_mbytes_per_sec": 0 00:08:37.868 }, 00:08:37.868 "claimed": false, 00:08:37.868 "zoned": false, 00:08:37.868 "supported_io_types": { 00:08:37.868 "read": true, 00:08:37.868 "write": true, 00:08:37.868 "unmap": true, 00:08:37.868 "flush": false, 00:08:37.868 "reset": true, 00:08:37.868 "nvme_admin": false, 00:08:37.868 "nvme_io": false, 00:08:37.868 "nvme_io_md": false, 00:08:37.868 "write_zeroes": true, 00:08:37.868 "zcopy": false, 00:08:37.868 "get_zone_info": false, 00:08:37.868 "zone_management": false, 00:08:37.868 "zone_append": false, 00:08:37.868 "compare": false, 00:08:37.868 "compare_and_write": false, 00:08:37.868 "abort": false, 00:08:37.868 "seek_hole": true, 00:08:37.868 "seek_data": true, 00:08:37.868 "copy": false, 00:08:37.868 "nvme_iov_md": false 00:08:37.868 }, 00:08:37.868 "driver_specific": { 00:08:37.868 "lvol": { 00:08:37.868 "lvol_store_uuid": "afa2ec6f-94d8-46f5-8ea0-19c202ba724d", 00:08:37.868 "base_bdev": "aio_bdev", 00:08:37.868 "thin_provision": false, 00:08:37.868 "num_allocated_clusters": 38, 00:08:37.869 "snapshot": false, 00:08:37.869 "clone": false, 00:08:37.869 "esnap_clone": false 00:08:37.869 } 00:08:37.869 } 00:08:37.869 } 00:08:37.869 ] 00:08:37.869 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:37.869 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:37.869 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:38.128 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:38.128 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 00:08:38.128 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:38.128 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:38.128 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 55431f19-8eea-4de1-a6a6-370cb1e6df4f 00:08:38.387 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u afa2ec6f-94d8-46f5-8ea0-19c202ba724d 
00:08:38.646 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:38.646 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.646 00:08:38.646 real 0m17.509s 00:08:38.646 user 0m43.765s 00:08:38.646 sys 0m4.989s 00:08:38.647 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.647 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.647 ************************************ 00:08:38.647 END TEST lvs_grow_dirty 00:08:38.647 ************************************ 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:38.906 nvmf_trace.0 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.906 rmmod nvme_tcp 00:08:38.906 rmmod nvme_fabrics 00:08:38.906 rmmod nvme_keyring 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2554851 ']' 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2554851 00:08:38.906 
21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2554851 ']' 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2554851 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.906 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2554851 00:08:38.906 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.906 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.906 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2554851' 00:08:38.906 killing process with pid 2554851 00:08:38.906 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2554851 00:08:38.906 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2554851 00:08:39.166 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:39.166 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:39.166 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:39.166 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:39.166 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:39.166 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.166 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.166 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.073 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:41.073 00:08:41.073 real 0m43.814s 00:08:41.073 user 1m4.614s 00:08:41.073 sys 0m12.739s 00:08:41.073 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.073 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.073 ************************************ 00:08:41.073 END TEST nvmf_lvs_grow 00:08:41.073 ************************************ 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.333 ************************************ 00:08:41.333 START TEST nvmf_bdev_io_wait 00:08:41.333 ************************************ 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:41.333 * Looking for test storage... 00:08:41.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:41.333 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.334 
21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:41.334 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:49.473 21:56:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:49.473 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.473 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:49.473 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:49.474 Found net devices under 0000:af:00.0: cvl_0_0 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:49.474 Found net devices under 0000:af:00.1: cvl_0_1 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.474 21:56:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:08:49.474 00:08:49.474 --- 10.0.0.2 ping statistics --- 00:08:49.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.474 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:08:49.474 00:08:49.474 --- 10.0.0.1 ping statistics --- 00:08:49.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.474 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2559378 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2559378 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2559378 ']' 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.474 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.474 [2024-07-24 21:56:27.587058] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
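For readability, the following is a minimal standalone sketch of the namespace topology that nvmf_tcp_init builds in the trace above, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing as this run; it simply recaps the traced commands and is not part of the captured output.

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init topology seen above (assumes cvl_0_0/cvl_0_1 exist).
set -e

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# The target-side port moves into its own namespace; the initiator port stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic in and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With this in place the target application is started inside cvl_0_0_ns_spdk (as the traced nvmfappstart call does) and listens on 10.0.0.2:4420, while the bdevperf initiators connect from the default namespace over cvl_0_1.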
00:08:49.474 [2024-07-24 21:56:27.587102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.474 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.474 [2024-07-24 21:56:27.664294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.474 [2024-07-24 21:56:27.738813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.474 [2024-07-24 21:56:27.738854] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.474 [2024-07-24 21:56:27.738864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.474 [2024-07-24 21:56:27.738873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.474 [2024-07-24 21:56:27.738881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.474 [2024-07-24 21:56:27.738931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.474 [2024-07-24 21:56:27.738952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.474 [2024-07-24 21:56:27.739021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.474 [2024-07-24 21:56:27.739023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.474 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.474 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:49.474 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.474 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.474 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.474 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.474 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:49.474 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.475 21:56:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.475 [2024-07-24 21:56:28.504938] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.475 Malloc0 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.475 [2024-07-24 21:56:28.565881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2559421 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2559424 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:49.475 { 00:08:49.475 "params": { 00:08:49.475 "name": "Nvme$subsystem", 00:08:49.475 "trtype": "$TEST_TRANSPORT", 00:08:49.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.475 "adrfam": "ipv4", 00:08:49.475 "trsvcid": "$NVMF_PORT", 00:08:49.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.475 "hdgst": ${hdgst:-false}, 00:08:49.475 "ddgst": ${ddgst:-false} 00:08:49.475 }, 00:08:49.475 "method": "bdev_nvme_attach_controller" 00:08:49.475 } 00:08:49.475 EOF 00:08:49.475 )") 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2559426 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:49.475 { 00:08:49.475 "params": { 00:08:49.475 "name": "Nvme$subsystem", 00:08:49.475 "trtype": "$TEST_TRANSPORT", 00:08:49.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.475 "adrfam": "ipv4", 00:08:49.475 "trsvcid": "$NVMF_PORT", 00:08:49.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.475 "hdgst": ${hdgst:-false}, 00:08:49.475 "ddgst": ${ddgst:-false} 00:08:49.475 }, 00:08:49.475 "method": "bdev_nvme_attach_controller" 00:08:49.475 } 00:08:49.475 EOF 00:08:49.475 )") 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:49.475 { 00:08:49.475 "params": { 00:08:49.475 "name": "Nvme$subsystem", 00:08:49.475 "trtype": "$TEST_TRANSPORT", 00:08:49.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.475 "adrfam": "ipv4", 00:08:49.475 "trsvcid": "$NVMF_PORT", 00:08:49.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.475 "hdgst": ${hdgst:-false}, 00:08:49.475 "ddgst": ${ddgst:-false} 00:08:49.475 }, 00:08:49.475 "method": "bdev_nvme_attach_controller" 00:08:49.475 } 00:08:49.475 EOF 00:08:49.475 )") 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # cat 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2559430 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:49.475 { 00:08:49.475 "params": { 00:08:49.475 "name": "Nvme$subsystem", 00:08:49.475 "trtype": "$TEST_TRANSPORT", 00:08:49.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.475 "adrfam": "ipv4", 00:08:49.475 "trsvcid": "$NVMF_PORT", 00:08:49.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.475 "hdgst": ${hdgst:-false}, 00:08:49.475 "ddgst": ${ddgst:-false} 00:08:49.475 }, 00:08:49.475 "method": "bdev_nvme_attach_controller" 00:08:49.475 } 00:08:49.475 EOF 00:08:49.475 )") 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2559421 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:49.475 "params": { 00:08:49.475 "name": "Nvme1", 00:08:49.475 "trtype": "tcp", 00:08:49.475 "traddr": "10.0.0.2", 00:08:49.475 "adrfam": "ipv4", 00:08:49.475 "trsvcid": "4420", 00:08:49.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.475 "hdgst": false, 00:08:49.475 "ddgst": false 00:08:49.475 }, 00:08:49.475 "method": "bdev_nvme_attach_controller" 00:08:49.475 }' 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:49.475 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:49.475 "params": { 00:08:49.475 "name": "Nvme1", 00:08:49.476 "trtype": "tcp", 00:08:49.476 "traddr": "10.0.0.2", 00:08:49.476 "adrfam": "ipv4", 00:08:49.476 "trsvcid": "4420", 00:08:49.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.476 "hdgst": false, 00:08:49.476 "ddgst": false 00:08:49.476 }, 00:08:49.476 "method": "bdev_nvme_attach_controller" 00:08:49.476 }' 00:08:49.476 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:49.476 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:49.476 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:49.476 "params": { 00:08:49.476 "name": "Nvme1", 00:08:49.476 "trtype": "tcp", 00:08:49.476 "traddr": "10.0.0.2", 00:08:49.476 "adrfam": "ipv4", 00:08:49.476 "trsvcid": "4420", 00:08:49.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.476 "hdgst": false, 00:08:49.476 "ddgst": false 00:08:49.476 }, 00:08:49.476 "method": "bdev_nvme_attach_controller" 00:08:49.476 }' 00:08:49.476 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:49.476 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:49.476 "params": { 00:08:49.476 "name": "Nvme1", 00:08:49.476 "trtype": "tcp", 00:08:49.476 "traddr": "10.0.0.2", 00:08:49.476 "adrfam": "ipv4", 00:08:49.476 "trsvcid": "4420", 00:08:49.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.476 "hdgst": false, 00:08:49.476 "ddgst": false 00:08:49.476 }, 00:08:49.476 "method": "bdev_nvme_attach_controller" 00:08:49.476 }' 00:08:49.476 [2024-07-24 21:56:28.618534] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:08:49.476 [2024-07-24 21:56:28.618583] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:49.476 [2024-07-24 21:56:28.618818] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
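The RPC sequence traced earlier (bdev_set_options, framework_start_init, nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) can also be issued by hand against a target started with --wait-for-rpc. A sketch, assuming the repository's scripts/rpc.py and the default /var/tmp/spdk.sock socket, with the same names and values as this run:

# Hedged recap of the target-side setup performed by bdev_io_wait.sh above.
# Assumes nvmf_tgt was started with --wait-for-rpc, as in the traced nvmfappstart call.
RPC=./scripts/rpc.py

$RPC bdev_set_options -p 5 -c 1
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC socket is a Unix-domain socket, so these commands work from the default namespace even though the target process itself runs inside cvl_0_0_ns_spdk.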
00:08:49.476 [2024-07-24 21:56:28.618860] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:49.476 [2024-07-24 21:56:28.620264] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:08:49.476 [2024-07-24 21:56:28.620271] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:08:49.476 [2024-07-24 21:56:28.620313] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-24 21:56:28.620314] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:49.476 --proc-type=auto ] 00:08:49.476 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.744 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.744 [2024-07-24 21:56:28.813932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.744 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.744 [2024-07-24 21:56:28.888127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:49.744 [2024-07-24 21:56:28.919514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.744 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.003 [2024-07-24 21:56:28.968805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.003 [2024-07-24 21:56:29.000521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:50.003 [2024-07-24 21:56:29.027985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.003 [2024-07-24 21:56:29.043919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:50.003 [2024-07-24 21:56:29.102341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:50.003 Running I/O for 1 seconds... 00:08:50.003 Running I/O for 1 seconds... 00:08:50.263 Running I/O for 1 seconds... 00:08:50.263 Running I/O for 1 seconds... 
00:08:51.201 00:08:51.201 Latency(us) 00:08:51.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.201 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:51.201 Nvme1n1 : 1.00 14352.47 56.06 0.00 0.00 8892.71 4954.52 16148.07 00:08:51.201 =================================================================================================================== 00:08:51.201 Total : 14352.47 56.06 0.00 0.00 8892.71 4954.52 16148.07 00:08:51.201 00:08:51.201 Latency(us) 00:08:51.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.201 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:51.201 Nvme1n1 : 1.01 6994.29 27.32 0.00 0.00 18161.51 8336.18 27892.12 00:08:51.201 =================================================================================================================== 00:08:51.201 Total : 6994.29 27.32 0.00 0.00 18161.51 8336.18 27892.12 00:08:51.201 00:08:51.201 Latency(us) 00:08:51.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.201 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:51.201 Nvme1n1 : 1.00 257597.80 1006.24 0.00 0.00 495.56 204.80 924.06 00:08:51.201 =================================================================================================================== 00:08:51.201 Total : 257597.80 1006.24 0.00 0.00 495.56 204.80 924.06 00:08:51.201 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2559424 00:08:51.201 00:08:51.201 Latency(us) 00:08:51.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.201 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:51.201 Nvme1n1 : 1.01 7485.55 29.24 0.00 0.00 17046.92 5819.60 43411.05 00:08:51.201 =================================================================================================================== 00:08:51.201 Total : 7485.55 29.24 0.00 0.00 17046.92 5819.60 43411.05 00:08:51.460 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2559426 00:08:51.460 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2559430 00:08:51.460 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
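Each of the four result tables above comes from a separate bdevperf instance driven by the resolved bdev_nvme_attach_controller JSON printed earlier. A sketch of re-running the write workload by hand, assuming the fragment shown in the log is wrapped in the standard subsystems/bdev config structure (gen_nvmf_target_json passes it to bdevperf over /dev/fd/63; a temporary file is used here instead):

# Sketch: reproduce the 'write' run from the first table against the live target.
# The params block matches the resolved config printed earlier in this log;
# the surrounding "subsystems"/"config" wrapper is assumed, not shown verbatim above.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

The read, flush, and unmap tables correspond to the same invocation with -m 0x20/-w read, -m 0x40/-w flush, and -m 0x80/-w unmap respectively, as shown in the traced commands.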
00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.461 rmmod nvme_tcp 00:08:51.461 rmmod nvme_fabrics 00:08:51.461 rmmod nvme_keyring 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2559378 ']' 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2559378 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2559378 ']' 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2559378 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.461 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2559378 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2559378' 00:08:51.720 killing process with pid 2559378 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2559378 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2559378 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.720 21:56:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.258 21:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:54.258 00:08:54.258 real 0m12.575s 00:08:54.258 user 0m19.517s 00:08:54.258 sys 0m7.334s 00:08:54.258 21:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.258 21:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.258 ************************************ 00:08:54.258 END TEST 
nvmf_bdev_io_wait 00:08:54.258 ************************************ 00:08:54.258 21:56:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.258 21:56:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:54.258 21:56:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.258 21:56:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.258 ************************************ 00:08:54.258 START TEST nvmf_queue_depth 00:08:54.258 ************************************ 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.258 * Looking for test storage... 00:08:54.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.258 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:54.259 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.831 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:00.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:00.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:00.832 Found net devices under 0000:af:00.0: cvl_0_0 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:00.832 Found net devices under 0000:af:00.1: cvl_0_1 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.832 21:56:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.091 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.091 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.091 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:09:01.091 00:09:01.091 --- 10.0.0.2 ping statistics --- 00:09:01.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.091 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:09:01.091 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:09:01.091 00:09:01.091 --- 10.0.0.1 ping statistics --- 00:09:01.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.091 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:01.091 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.091 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:01.091 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.091 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.091 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2563641 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2563641 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2563641 ']' 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
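The nvmf/common.sh trace above is the TCP test-fabric bring-up: one E810 port (cvl_0_0) is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2 for the target side, the other port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator side, the 4420/tcp iptables rule and the two pings verify the path, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of the same bring-up, assuming the interface names and addresses seen in this run and paths relative to the spdk checkout:

# reproduce the netns split done by nvmf_tcp_init (names/IPs as traced above)
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # connectivity check both ways
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target runs inside the namespace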
00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.092 21:56:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.092 [2024-07-24 21:56:40.227790] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:09:01.092 [2024-07-24 21:56:40.227837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.092 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.092 [2024-07-24 21:56:40.300332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.351 [2024-07-24 21:56:40.368106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.351 [2024-07-24 21:56:40.368149] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.351 [2024-07-24 21:56:40.368158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.351 [2024-07-24 21:56:40.368166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.351 [2024-07-24 21:56:40.368189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.351 [2024-07-24 21:56:40.368219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.919 [2024-07-24 21:56:41.074301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.919 Malloc0 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.919 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.179 [2024-07-24 21:56:41.132877] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2563814 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2563814 /var/tmp/bdevperf.sock 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2563814 ']' 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.179 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.179 [2024-07-24 21:56:41.184678] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
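queue_depth.sh configures the target through the rpc_cmd wrapper, which forwards to scripts/rpc.py on /var/tmp/spdk.sock. Rough direct equivalents of the calls traced above (TCP transport with an 8 KiB I/O unit, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem, its namespace, and a TCP listener on 10.0.0.2:4420), assuming paths relative to the spdk checkout:

# approximate rpc.py equivalents of the rpc_cmd sequence in this trace
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420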
00:09:02.179 [2024-07-24 21:56:41.184736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563814 ] 00:09:02.179 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.179 [2024-07-24 21:56:41.254425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.179 [2024-07-24 21:56:41.328180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.115 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.115 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:03.115 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:03.115 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.115 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.115 NVMe0n1 00:09:03.115 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.115 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.115 Running I/O for 10 seconds... 00:09:13.103 00:09:13.103 Latency(us) 00:09:13.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.103 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:13.103 Verification LBA range: start 0x0 length 0x4000 00:09:13.103 NVMe0n1 : 10.06 13116.09 51.23 0.00 0.00 77828.33 18245.22 54525.95 00:09:13.103 =================================================================================================================== 00:09:13.103 Total : 13116.09 51.23 0.00 0.00 77828.33 18245.22 54525.95 00:09:13.103 0 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2563814 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2563814 ']' 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2563814 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2563814 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2563814' 00:09:13.103 killing process with pid 2563814 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2563814 00:09:13.103 Received shutdown 
signal, test time was about 10.000000 seconds 00:09:13.103 00:09:13.103 Latency(us) 00:09:13.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.103 =================================================================================================================== 00:09:13.103 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.103 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2563814 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.364 rmmod nvme_tcp 00:09:13.364 rmmod nvme_fabrics 00:09:13.364 rmmod nvme_keyring 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2563641 ']' 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2563641 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2563641 ']' 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2563641 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.364 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2563641 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2563641' 00:09:13.657 killing process with pid 2563641 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2563641 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2563641 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.657 21:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.197 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:16.197 00:09:16.197 real 0m21.860s 00:09:16.197 user 0m24.781s 00:09:16.197 sys 0m7.338s 00:09:16.197 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.197 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.197 ************************************ 00:09:16.197 END TEST nvmf_queue_depth 00:09:16.197 ************************************ 00:09:16.197 21:56:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.197 21:56:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:16.197 21:56:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.197 21:56:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.197 ************************************ 00:09:16.197 START TEST nvmf_target_multipath 00:09:16.197 ************************************ 00:09:16.197 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.197 * Looking for test storage... 
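The queue-depth measurement itself runs in a separate bdevperf process: it is started idle (-z) on its own RPC socket, an NVMe-oF controller is attached over TCP, and bdevperf.py perform_tests launches a 10-second verify workload at queue depth 1024 with 4 KiB I/O; in this run it sustained about 13.1k IOPS (~51 MiB/s) before the target was torn down. A condensed sketch of that sequence, assuming paths relative to the spdk checkout:

# start bdevperf idle (-z) on its own RPC socket: QD 1024, 4 KiB, verify, 10 s
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# attach the subsystem exported above; it shows up as bdev NVMe0n1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# trigger the actual I/O run against the attached bdev
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests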
00:09:16.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:16.197 21:56:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
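The nvmf/common.sh block that follows (identical to the one at the start of the queue_depth run) classifies NICs purely by PCI device ID: 0x1592/0x159b are the Intel E810 parts, 0x37d2 is X722, and the mlx list covers the usual Mellanox ConnectX IDs; each selected PCI function is then mapped to its kernel netdev through sysfs. A simplified sketch of that mapping step, assuming the two E810 functions found in this run:

# map each matching PCI function to its net device via sysfs, as nvmf/common.sh does
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done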
00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.772 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:22.773 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:22.773 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:22.773 Found net devices under 0000:af:00.0: cvl_0_0 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.773 21:57:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:22.773 Found net devices under 0000:af:00.1: cvl_0_1 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:22.773 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:09:22.773 00:09:22.773 --- 10.0.0.2 ping statistics --- 00:09:22.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.773 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:09:22.773 00:09:22.773 --- 10.0.0.1 ping statistics --- 00:09:22.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.773 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:22.773 only one NIC for nvmf test 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.773 rmmod nvme_tcp 00:09:22.773 rmmod nvme_fabrics 00:09:22.773 rmmod nvme_keyring 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:22.773 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:22.774 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:22.774 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:22.774 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.774 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.774 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.774 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.774 21:57:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:24.681 00:09:24.681 real 0m8.826s 
00:09:24.681 user 0m1.706s 00:09:24.681 sys 0m5.024s 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:24.681 ************************************ 00:09:24.681 END TEST nvmf_target_multipath 00:09:24.681 ************************************ 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.681 ************************************ 00:09:24.681 START TEST nvmf_zcopy 00:09:24.681 ************************************ 00:09:24.681 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:24.941 * Looking for test storage... 00:09:24.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.941 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.941 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:24.941 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.941 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.941 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.941 21:57:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.941 21:57:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:24.941 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.942 21:57:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:24.942 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.514 21:57:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:31.514 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:31.514 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:31.514 Found net devices under 0000:af:00.0: cvl_0_0 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:31.514 Found net devices under 0000:af:00.1: cvl_0_1 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.514 21:57:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.514 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.515 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:09:31.774 00:09:31.774 --- 10.0.0.2 ping statistics --- 00:09:31.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.774 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:09:31.774 00:09:31.774 --- 10.0.0.1 ping statistics --- 00:09:31.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.774 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2573672 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2573672 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2573672 ']' 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.774 21:57:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.032 [2024-07-24 21:57:11.007605] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:09:32.032 [2024-07-24 21:57:11.007654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.032 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.032 [2024-07-24 21:57:11.081638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.032 [2024-07-24 21:57:11.152902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.032 [2024-07-24 21:57:11.152936] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.032 [2024-07-24 21:57:11.152945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.032 [2024-07-24 21:57:11.152953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.032 [2024-07-24 21:57:11.152976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.032 [2024-07-24 21:57:11.152995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.598 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.598 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:32.598 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.598 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.598 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.856 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.856 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.857 [2024-07-24 21:57:11.849849] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.857 [2024-07-24 21:57:11.870010] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.857 malloc0 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:32.857 { 00:09:32.857 "params": { 00:09:32.857 "name": "Nvme$subsystem", 00:09:32.857 "trtype": "$TEST_TRANSPORT", 00:09:32.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.857 "adrfam": "ipv4", 00:09:32.857 "trsvcid": "$NVMF_PORT", 00:09:32.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.857 "hdgst": ${hdgst:-false}, 00:09:32.857 "ddgst": ${ddgst:-false} 00:09:32.857 }, 00:09:32.857 "method": "bdev_nvme_attach_controller" 00:09:32.857 } 00:09:32.857 EOF 00:09:32.857 )") 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:32.857 21:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:32.857 "params": { 00:09:32.857 "name": "Nvme1", 00:09:32.857 "trtype": "tcp", 00:09:32.857 "traddr": "10.0.0.2", 00:09:32.857 "adrfam": "ipv4", 00:09:32.857 "trsvcid": "4420", 00:09:32.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.857 "hdgst": false, 00:09:32.857 "ddgst": false 00:09:32.857 }, 00:09:32.857 "method": "bdev_nvme_attach_controller" 00:09:32.857 }' 00:09:32.857 [2024-07-24 21:57:11.980920] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:09:32.857 [2024-07-24 21:57:11.980967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573783 ] 00:09:32.857 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.857 [2024-07-24 21:57:12.052219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.116 [2024-07-24 21:57:12.122524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.116 Running I/O for 10 seconds... 00:09:43.160 00:09:43.160 Latency(us) 00:09:43.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.160 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:43.160 Verification LBA range: start 0x0 length 0x1000 00:09:43.160 Nvme1n1 : 10.01 8957.53 69.98 0.00 0.00 14248.68 1323.83 31876.71 00:09:43.160 =================================================================================================================== 00:09:43.160 Total : 8957.53 69.98 0.00 0.00 14248.68 1323.83 31876.71 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2575544 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:43.420 { 00:09:43.420 "params": { 00:09:43.420 "name": "Nvme$subsystem", 00:09:43.420 "trtype": "$TEST_TRANSPORT", 00:09:43.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.420 "adrfam": "ipv4", 00:09:43.420 "trsvcid": "$NVMF_PORT", 00:09:43.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.420 "hdgst": ${hdgst:-false}, 00:09:43.420 "ddgst": ${ddgst:-false} 00:09:43.420 }, 00:09:43.420 "method": "bdev_nvme_attach_controller" 00:09:43.420 } 00:09:43.420 EOF 00:09:43.420 )") 00:09:43.420 [2024-07-24 
21:57:22.502288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.420 [2024-07-24 21:57:22.502323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:43.420 21:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:43.420 "params": { 00:09:43.420 "name": "Nvme1", 00:09:43.420 "trtype": "tcp", 00:09:43.420 "traddr": "10.0.0.2", 00:09:43.420 "adrfam": "ipv4", 00:09:43.420 "trsvcid": "4420", 00:09:43.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.420 "hdgst": false, 00:09:43.420 "ddgst": false 00:09:43.420 }, 00:09:43.420 "method": "bdev_nvme_attach_controller" 00:09:43.420 }' 00:09:43.420 [2024-07-24 21:57:22.514293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.420 [2024-07-24 21:57:22.514306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.420 [2024-07-24 21:57:22.526321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.420 [2024-07-24 21:57:22.526334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.420 [2024-07-24 21:57:22.538352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.420 [2024-07-24 21:57:22.538363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.420 [2024-07-24 21:57:22.541757] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:09:43.420 [2024-07-24 21:57:22.541805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575544 ] 00:09:43.420 [2024-07-24 21:57:22.550383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.420 [2024-07-24 21:57:22.550394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.420 [2024-07-24 21:57:22.562412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.420 [2024-07-24 21:57:22.562423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.420 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.420 [2024-07-24 21:57:22.574443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.420 [2024-07-24 21:57:22.574453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.420 [2024-07-24 21:57:22.586476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.420 [2024-07-24 21:57:22.586486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.420 [2024-07-24 21:57:22.598508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.421 [2024-07-24 21:57:22.598522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.421 [2024-07-24 21:57:22.610540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.421 [2024-07-24 21:57:22.610552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.421 [2024-07-24 21:57:22.611564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.421 [2024-07-24 21:57:22.622573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.421 [2024-07-24 21:57:22.622588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.680 [2024-07-24 21:57:22.634603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.680 [2024-07-24 21:57:22.634616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.680 [2024-07-24 21:57:22.646635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.680 [2024-07-24 21:57:22.646651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.680 [2024-07-24 21:57:22.658669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.680 [2024-07-24 21:57:22.658689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.680 [2024-07-24 21:57:22.670699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.680 [2024-07-24 21:57:22.670711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.680 [2024-07-24 21:57:22.682737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.680 [2024-07-24 21:57:22.682750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.680 [2024-07-24 21:57:22.683236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.680 [2024-07-24 21:57:22.694771] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.680 [2024-07-24 21:57:22.694789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.680 [2024-07-24 21:57:22.706800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.706818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.718843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.718858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.730870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.730884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.742893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.742906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.754922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.754933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.766969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.766989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.778996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.779011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.791025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.791041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.803054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.803069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.815083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.815101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.827116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.827135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 Running I/O for 5 seconds... 
00:09:43.681 [2024-07-24 21:57:22.839140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.839152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.855424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.855444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.867699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.867724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.681 [2024-07-24 21:57:22.881982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.681 [2024-07-24 21:57:22.882003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:22.895592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:22.895613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:22.906786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:22.906806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:22.920756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:22.920777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:22.934468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:22.934489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:22.948265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:22.948285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:22.961859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:22.961879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:22.975276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:22.975296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:22.988828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:22.988847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:23.002764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:23.002785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:23.016289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 [2024-07-24 21:57:23.016309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:23.030107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.940 
[2024-07-24 21:57:23.030128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.940 [2024-07-24 21:57:23.043668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.941 [2024-07-24 21:57:23.043688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.941 [2024-07-24 21:57:23.057841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.941 [2024-07-24 21:57:23.057861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.941 [2024-07-24 21:57:23.068241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.941 [2024-07-24 21:57:23.068261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.941 [2024-07-24 21:57:23.082083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.941 [2024-07-24 21:57:23.082106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.941 [2024-07-24 21:57:23.095574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.941 [2024-07-24 21:57:23.095594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.941 [2024-07-24 21:57:23.109081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.941 [2024-07-24 21:57:23.109105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.941 [2024-07-24 21:57:23.122495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.941 [2024-07-24 21:57:23.122515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.941 [2024-07-24 21:57:23.135804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.941 [2024-07-24 21:57:23.135824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.941 [2024-07-24 21:57:23.149256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.941 [2024-07-24 21:57:23.149276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.162797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.162817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.176172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.176191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.189964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.189983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.203178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.203198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.216667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.216687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.229954] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.229973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.243732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.243767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.257433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.257452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.270867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.270887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.284448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.284467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.297935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.297954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.311508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.311527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.324984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.325003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.338901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.338920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.352939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.352958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.366407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.366427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.380061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.380081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.393650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.393669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.200 [2024-07-24 21:57:23.407710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.200 [2024-07-24 21:57:23.407734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.459 [2024-07-24 21:57:23.421276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.421296] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.434932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.434951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.448235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.448254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.462291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.462311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.475861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.475880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.489613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.489633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.503167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.503187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.517447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.517466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.533499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.533518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.547537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.547558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.558687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.558707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.572401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.572421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.586054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.586073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.599884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.599904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.613439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.613458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.627473] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.627492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.643071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.643091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.656673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.656693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.460 [2024-07-24 21:57:23.670053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.460 [2024-07-24 21:57:23.670073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.684061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.684081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.697566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.697586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.711185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.711204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.724965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.724984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.738681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.738700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.749858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.749878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.764186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.764206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.778117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.778137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.789411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.789430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.803092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.803111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.816206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.816226] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.830117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.830141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.843611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.843629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.856844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.856864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.870577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.870597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.884293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.884313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.897903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.897922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.911927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.911949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.719 [2024-07-24 21:57:23.927680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.719 [2024-07-24 21:57:23.927699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:23.941554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:23.941573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:23.954775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:23.954795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:23.969053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:23.969071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:23.983625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:23.983645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:23.998265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:23.998284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:24.011947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:24.011966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:24.026099] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:24.026119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:24.039836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:24.039855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:24.051522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:24.051541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:24.066091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:24.066109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:24.077299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:24.077318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:24.090956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:24.090980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:24.104692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.978 [2024-07-24 21:57:24.104711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.978 [2024-07-24 21:57:24.118508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.979 [2024-07-24 21:57:24.118528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.979 [2024-07-24 21:57:24.132256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.979 [2024-07-24 21:57:24.132275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.979 [2024-07-24 21:57:24.146286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.979 [2024-07-24 21:57:24.146305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.979 [2024-07-24 21:57:24.157720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.979 [2024-07-24 21:57:24.157739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.979 [2024-07-24 21:57:24.172002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.979 [2024-07-24 21:57:24.172021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.979 [2024-07-24 21:57:24.187543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.979 [2024-07-24 21:57:24.187563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.201148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.201169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.214618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.214638] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.228232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.228253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.241802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.241822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.255524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.255544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.269416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.269435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.282690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.282708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.296368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.296387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.309907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.309926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.323441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.323460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.337261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.337282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.350652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.350675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.364065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.364084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.377603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.377623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.390875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.390895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.404429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.404449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.417867] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.417887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.431415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.431435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.238 [2024-07-24 21:57:24.444909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.238 [2024-07-24 21:57:24.444929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.458378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.458398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.471787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.471808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.485134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.485154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.498679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.498700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.512326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.512346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.525988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.526009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.539146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.539166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.552938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.552958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.566769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.566790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.580614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.580635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.594329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.594349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.607796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.607820] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.621068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.621088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.634873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.634893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.648352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.648372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.497 [2024-07-24 21:57:24.661646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.497 [2024-07-24 21:57:24.661665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.498 [2024-07-24 21:57:24.675219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.498 [2024-07-24 21:57:24.675238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.498 [2024-07-24 21:57:24.688605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.498 [2024-07-24 21:57:24.688625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.498 [2024-07-24 21:57:24.702208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.498 [2024-07-24 21:57:24.702228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.716053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.716077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.729881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.729901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.743009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.743028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.756621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.756641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.770347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.770366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.783868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.783888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.797238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.797257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.810614] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.810633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.824265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.824284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.837723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.837741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.851106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.851125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.864413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.864432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.878133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.878153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.891506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.891526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.905251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.905271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.918793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.918813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.932212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.932231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.945552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.945572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.757 [2024-07-24 21:57:24.959727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.757 [2024-07-24 21:57:24.959745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:24.975201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:24.975222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:24.988702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:24.988726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.001908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.001927] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.015786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.015805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.029175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.029194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.042667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.042686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.056756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.056775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.067871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.067890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.081666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.081686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.094920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.094940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.108820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.108839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.119346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.119364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.133248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.133268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.147028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.147048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.160294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.160313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.173967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.173986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.187502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.187522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.199811] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.199830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.213862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.213882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.017 [2024-07-24 21:57:25.227816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.017 [2024-07-24 21:57:25.227836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.241518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.241538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.256105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.256124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.271655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.271674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.285451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.285471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.299173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.299192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.312727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.312763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.326754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.326774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.337665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.337684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.352024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.352044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.365930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.365949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.377497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.377516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.391776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.391795] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.402424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.402443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.416847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.416867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.427457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.427476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.276 [2024-07-24 21:57:25.441651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.276 [2024-07-24 21:57:25.441670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.277 [2024-07-24 21:57:25.452753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.277 [2024-07-24 21:57:25.452772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.277 [2024-07-24 21:57:25.466669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.277 [2024-07-24 21:57:25.466688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.277 [2024-07-24 21:57:25.480059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.277 [2024-07-24 21:57:25.480079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.494306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.494325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.509570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.509589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.523392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.523412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.537238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.537257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.552481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.552500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.566669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.566688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.581338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.581357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.592490] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.592511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.606513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.606533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.620391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.620410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.634294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.634314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.645823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.645853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.659978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.659997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.673770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.673790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.687099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.687119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.700974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.700993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.711651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.711670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.725886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.725906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.535 [2024-07-24 21:57:25.739199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.535 [2024-07-24 21:57:25.739221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.752581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.752602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.765858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.765878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.779496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.779516] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.793434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.793454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.807099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.807120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.821152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.821172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.834911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.834930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.848712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.848738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.862184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.862204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.875914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.875938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.889799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.889819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.903137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.903157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.916642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.916661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.930064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.930084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.943265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.943289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.956798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.956820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.970428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.970449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.984584] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.984603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.794 [2024-07-24 21:57:25.999917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.794 [2024-07-24 21:57:25.999937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.013647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.013669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.027038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.027059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.040776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.040798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.054108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.054129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.068015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.068035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.081343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.081364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.094624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.094645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.108033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.108053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.121649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.121669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.135215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.135239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.148737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.148757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.162448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.162467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.175789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.175809] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.189335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.189355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.203215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.203235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.216711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.216737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.230063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.230083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.243531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.243552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.054 [2024-07-24 21:57:26.257024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.054 [2024-07-24 21:57:26.257044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.270464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.270485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.283639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.283659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.297054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.297074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.310862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.310882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.322344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.322364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.336391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.336415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.349620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.349641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.363061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.363081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.376353] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.376373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.390058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.390081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.403272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.403292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.416806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.416826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.430311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.430331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.443731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.443768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.457027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.457047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.470236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.470256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.484011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.484031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.498115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.498134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.313 [2024-07-24 21:57:26.513841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.313 [2024-07-24 21:57:26.513861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.527653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.527675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.541162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.541183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.554690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.554709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.567961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.567982] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.581883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.581903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.595198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.595218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.608831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.608850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.622341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.622363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.635933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.635954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.649897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.649921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.660860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.660880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.674863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.674883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.689182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.689202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.700987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.701006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.715813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.715832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.731083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.731103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.744977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.744997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.758516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.758536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.573 [2024-07-24 21:57:26.772333] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.573 [2024-07-24 21:57:26.772352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.787563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.787584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.801814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.801845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.812959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.812980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.826892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.826912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.839963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.839983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.853757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.853776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.867391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.867412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.878550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.878570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.892402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.892422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.906176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.906196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.917277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.832 [2024-07-24 21:57:26.917297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.832 [2024-07-24 21:57:26.931220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.833 [2024-07-24 21:57:26.931240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.833 [2024-07-24 21:57:26.944654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.833 [2024-07-24 21:57:26.944674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.833 [2024-07-24 21:57:26.958479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.833 [2024-07-24 21:57:26.958498] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.833 [2024-07-24 21:57:26.972167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.833 [2024-07-24 21:57:26.972187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.833 [2024-07-24 21:57:26.986561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.833 [2024-07-24 21:57:26.986580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.833 [2024-07-24 21:57:27.002129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.833 [2024-07-24 21:57:27.002149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.833 [2024-07-24 21:57:27.016063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.833 [2024-07-24 21:57:27.016084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.833 [2024-07-24 21:57:27.029784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.833 [2024-07-24 21:57:27.029804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.833 [2024-07-24 21:57:27.043207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.833 [2024-07-24 21:57:27.043227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.056797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.056817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.070202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.070222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.083675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.083696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.097130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.097150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.110444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.110464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.124232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.124252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.137779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.137799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.151613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.151632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.164790] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.164810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.178653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.178674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.192386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.192405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.205753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.205772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.220342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.220362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.235514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.235534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.249681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.249701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.261790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.261810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.275611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.275632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.289078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.289098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.092 [2024-07-24 21:57:27.302755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.092 [2024-07-24 21:57:27.302775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.315871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.315893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.329461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.329482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.342653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.342673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.356881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.356902] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.370687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.370707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.384273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.384292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.397592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.397612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.411181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.411201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.424992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.425013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.438253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.438274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.451904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.451923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.465294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.465314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.479026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.479046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.492552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.492572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.505898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.505918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.519616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.519636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.533275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.533295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.546501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.546521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.352 [2024-07-24 21:57:27.559995] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.352 [2024-07-24 21:57:27.560015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.573596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.573617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.587317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.587338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.600949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.600971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.614640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.614660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.628438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.628459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.642417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.642447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.653573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.653594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.667444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.667465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.680872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.680893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.694642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.694662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.708343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.708364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.721683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.721705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.735273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.735294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.748417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.748438] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.762133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.762153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.775969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.775990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.789165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.789185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.802606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.802626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.612 [2024-07-24 21:57:27.816300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.612 [2024-07-24 21:57:27.816321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.871 [2024-07-24 21:57:27.830073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.871 [2024-07-24 21:57:27.830093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.843202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.843222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.855765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.855785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 00:09:48.872 Latency(us) 00:09:48.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.872 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:48.872 Nvme1n1 : 5.01 17353.93 135.58 0.00 0.00 7368.08 2477.26 21705.52 00:09:48.872 =================================================================================================================== 00:09:48.872 Total : 17353.93 135.58 0.00 0.00 7368.08 2477.26 21705.52 00:09:48.872 [2024-07-24 21:57:27.865458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.865474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.877480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.877498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.889519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.889538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.901547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.901562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.913580] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.913593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.925609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.925624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.937638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.937652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.949671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.949685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.961700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.961713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.973737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.973748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.985774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.985787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:27.997799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:27.997810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:28.009832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:28.009845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:28.021864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:28.021878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 [2024-07-24 21:57:28.033906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.872 [2024-07-24 21:57:28.033918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2575544) - No such process 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2575544 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:48.872 21:57:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.872 delay0 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.872 21:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:49.131 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.131 [2024-07-24 21:57:28.122080] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:55.699 Initializing NVMe Controllers 00:09:55.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:55.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:55.699 Initialization complete. Launching workers. 00:09:55.699 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 195 00:09:55.699 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 478, failed to submit 37 00:09:55.699 success 338, unsuccess 140, failed 0 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.699 rmmod nvme_tcp 00:09:55.699 rmmod nvme_fabrics 00:09:55.699 rmmod nvme_keyring 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2573672 ']' 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2573672 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2573672 ']' 00:09:55.699 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2573672 00:09:55.700 21:57:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2573672 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2573672' 00:09:55.700 killing process with pid 2573672 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2573672 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2573672 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.700 21:57:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.642 00:09:57.642 real 0m32.735s 00:09:57.642 user 0m41.682s 00:09:57.642 sys 0m13.380s 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.642 ************************************ 00:09:57.642 END TEST nvmf_zcopy 00:09:57.642 ************************************ 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.642 ************************************ 00:09:57.642 START TEST nvmf_nmic 00:09:57.642 ************************************ 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.642 * Looking for test storage... 
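(For reference, the delay-bdev/abort sequence that the zcopy test just finished can be replayed by hand with the same RPCs the script traced above; this is only a sketch, assuming a still-running nvmf_tgt, SPDK's stock scripts/rpc.py client, and the default /var/tmp/spdk.sock socket.)
# Sketch: replay the tail of nvmf_zcopy manually (paths, NQN and delay values taken from the trace above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Drop the namespace the add_ns loop was intentionally colliding with.
$SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# Wrap malloc0 in a delay bdev so queued I/O stays in flight long enough to be aborted.
$SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Re-export the delayed bdev as NSID 1 and drive it with the abort example over TCP.
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
$SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'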
00:09:57.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.642 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.643 21:57:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.643 21:57:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.217 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:04.218 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:04.218 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.218 21:57:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:04.218 Found net devices under 0000:af:00.0: cvl_0_0 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:04.218 Found net devices under 0000:af:00.1: cvl_0_1 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:04.218 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:04.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:10:04.477 00:10:04.477 --- 10.0.0.2 ping statistics --- 00:10:04.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.477 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:04.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:10:04.477 00:10:04.477 --- 10.0.0.1 ping statistics --- 00:10:04.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.477 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2581328 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2581328 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@831 -- # '[' -z 2581328 ']' 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.477 21:57:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.477 [2024-07-24 21:57:43.545306] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:10:04.477 [2024-07-24 21:57:43.545353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.477 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.477 [2024-07-24 21:57:43.618885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.736 [2024-07-24 21:57:43.694260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.736 [2024-07-24 21:57:43.694299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.736 [2024-07-24 21:57:43.694309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.736 [2024-07-24 21:57:43.694317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.736 [2024-07-24 21:57:43.694341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
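(The nvmfappstart/waitforlisten step above amounts to launching the target inside the test's network namespace and polling its RPC socket; a minimal sketch, assuming the namespace and socket path shown in the trace and SPDK's standard rpc.py client.)
# Sketch: start nvmf_tgt in the cvl_0_0_ns_spdk namespace and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Equivalent of waitforlisten: retry until /var/tmp/spdk.sock answers a harmless RPC.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done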
00:10:04.736 [2024-07-24 21:57:43.694388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.736 [2024-07-24 21:57:43.694407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.736 [2024-07-24 21:57:43.694518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.736 [2024-07-24 21:57:43.694520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.304 [2024-07-24 21:57:44.407958] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.304 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.305 Malloc0 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.305 [2024-07-24 21:57:44.462415] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:05.305 test case1: single bdev can't be used in multiple subsystems 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.305 [2024-07-24 21:57:44.486321] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:05.305 [2024-07-24 21:57:44.486340] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:05.305 [2024-07-24 21:57:44.486349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.305 request: 00:10:05.305 { 00:10:05.305 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:05.305 "namespace": { 00:10:05.305 "bdev_name": "Malloc0", 00:10:05.305 "no_auto_visible": false 00:10:05.305 }, 00:10:05.305 "method": "nvmf_subsystem_add_ns", 00:10:05.305 "req_id": 1 00:10:05.305 } 00:10:05.305 Got JSON-RPC error response 00:10:05.305 response: 00:10:05.305 { 00:10:05.305 "code": -32602, 00:10:05.305 "message": "Invalid parameters" 00:10:05.305 } 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:05.305 Adding namespace failed - expected result. 
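(Test case1 above demonstrates that a bdev already claimed exclusive_write by one subsystem cannot be added to a second one; a hedged sketch of the same check issued directly with rpc.py, reusing the subsystem names, listener and bdev from the trace.)
# Sketch: reproduce test case1 against the running target; the second add_ns is expected to fail.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Malloc0 is already claimed by cnode1, so this returns the "Invalid parameters" JSON-RPC error seen above.
if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
  echo 'unexpected: shared bdev was added to a second subsystem' >&2
else
  echo ' Adding namespace failed - expected result.'
fi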
00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:05.305 test case2: host connect to nvmf target in multiple paths 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.305 [2024-07-24 21:57:44.502490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.305 21:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.683 21:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:08.060 21:57:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.060 21:57:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:08.060 21:57:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.060 21:57:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:08.060 21:57:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:09.966 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:09.966 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:09.966 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.966 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:09.966 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.966 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:09.966 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:09.966 [global] 00:10:09.966 thread=1 00:10:09.966 invalidate=1 00:10:09.966 rw=write 00:10:09.966 time_based=1 00:10:09.966 runtime=1 00:10:09.966 ioengine=libaio 00:10:09.966 direct=1 00:10:09.966 bs=4096 00:10:09.966 iodepth=1 00:10:09.966 norandommap=0 00:10:09.966 numjobs=1 00:10:09.966 00:10:09.966 verify_dump=1 00:10:09.966 verify_backlog=512 00:10:09.966 verify_state_save=0 00:10:09.966 do_verify=1 00:10:09.966 verify=crc32c-intel 00:10:09.966 [job0] 00:10:09.966 filename=/dev/nvme0n1 00:10:09.966 Could not set queue depth (nvme0n1) 00:10:10.223 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:10.223 fio-3.35 00:10:10.223 Starting 1 thread 00:10:11.603 00:10:11.603 job0: (groupid=0, jobs=1): err= 0: pid=2582552: Wed Jul 24 21:57:50 2024 00:10:11.603 read: IOPS=1231, BW=4927KiB/s (5045kB/s)(4932KiB/1001msec) 00:10:11.603 slat (nsec): min=9371, max=58751, avg=10362.27, stdev=2077.83 00:10:11.603 clat (usec): min=397, max=795, avg=488.91, stdev=24.66 00:10:11.603 lat (usec): min=407, max=805, avg=499.28, stdev=24.81 00:10:11.603 clat percentiles (usec): 00:10:11.603 | 1.00th=[ 416], 5.00th=[ 441], 10.00th=[ 465], 20.00th=[ 469], 00:10:11.603 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 498], 60.00th=[ 502], 00:10:11.603 | 70.00th=[ 506], 80.00th=[ 510], 90.00th=[ 515], 95.00th=[ 519], 00:10:11.603 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 611], 99.95th=[ 799], 00:10:11.603 | 99.99th=[ 799] 00:10:11.603 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:11.603 slat (nsec): min=12615, max=97152, avg=13820.50, stdev=3231.49 00:10:11.603 clat (usec): min=175, max=3266, avg=231.19, stdev=79.65 00:10:11.603 lat (usec): min=210, max=3280, avg=245.01, stdev=79.77 00:10:11.603 clat percentiles (usec): 00:10:11.603 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:10:11.603 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:10:11.603 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 255], 00:10:11.604 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 424], 99.95th=[ 3261], 00:10:11.604 | 99.99th=[ 3261] 00:10:11.604 bw ( KiB/s): min= 7712, max= 7712, per=100.00%, avg=7712.00, stdev= 0.00, samples=1 00:10:11.604 iops : min= 1928, max= 1928, avg=1928.00, stdev= 0.00, samples=1 00:10:11.604 lat (usec) : 250=49.33%, 500=32.03%, 750=18.56%, 1000=0.04% 00:10:11.604 lat (msec) : 4=0.04% 00:10:11.604 cpu : usr=4.20%, sys=3.70%, ctx=2770, majf=0, minf=2 00:10:11.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.604 issued rwts: total=1233,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.604 00:10:11.604 Run status group 0 (all jobs): 00:10:11.604 READ: bw=4927KiB/s (5045kB/s), 4927KiB/s-4927KiB/s (5045kB/s-5045kB/s), io=4932KiB (5050kB), run=1001-1001msec 00:10:11.604 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:10:11.604 00:10:11.604 Disk stats (read/write): 00:10:11.604 nvme0n1: ios=1076/1536, merge=0/0, ticks=529/338, in_queue=867, util=91.88% 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:11.604 
21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.604 rmmod nvme_tcp 00:10:11.604 rmmod nvme_fabrics 00:10:11.604 rmmod nvme_keyring 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2581328 ']' 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2581328 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2581328 ']' 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2581328 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.604 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2581328 00:10:11.863 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.863 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.863 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2581328' 00:10:11.863 killing process with pid 2581328 00:10:11.863 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2581328 00:10:11.863 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2581328 00:10:11.863 21:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.863 21:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.863 21:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.863 21:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.863 21:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.863 21:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.863 21:57:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.863 21:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:14.403 00:10:14.403 real 0m16.435s 00:10:14.403 user 0m39.046s 00:10:14.403 sys 0m6.201s 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.403 ************************************ 00:10:14.403 END TEST nvmf_nmic 00:10:14.403 ************************************ 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.403 ************************************ 00:10:14.403 START TEST nvmf_fio_target 00:10:14.403 ************************************ 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:14.403 * Looking for test storage... 00:10:14.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.403 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.013 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:21.014 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:21.014 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:21.014 Found net devices under 0000:af:00.0: cvl_0_0 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:21.014 Found net devices under 0000:af:00.1: cvl_0_1 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.014 21:57:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.014 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:10:21.014 00:10:21.014 --- 10.0.0.2 ping statistics --- 00:10:21.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.014 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:10:21.014 00:10:21.014 --- 10.0.0.1 ping statistics --- 00:10:21.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.014 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2586470 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2586470 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2586470 ']' 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.014 21:58:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.014 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.015 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.015 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.015 [2024-07-24 21:58:00.135588] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:10:21.015 [2024-07-24 21:58:00.135637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.015 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.015 [2024-07-24 21:58:00.210304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.274 [2024-07-24 21:58:00.285252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.274 [2024-07-24 21:58:00.285292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.274 [2024-07-24 21:58:00.285302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.274 [2024-07-24 21:58:00.285311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.274 [2024-07-24 21:58:00.285321] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:21.274 [2024-07-24 21:58:00.285362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.274 [2024-07-24 21:58:00.285457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.274 [2024-07-24 21:58:00.285561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.274 [2024-07-24 21:58:00.285563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.843 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.843 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:21.843 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.843 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:21.843 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.843 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.843 21:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.102 [2024-07-24 21:58:01.141381] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.102 21:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.361 21:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:22.361 21:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.620 21:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:22.620 21:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.620 21:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:22.620 21:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.879 21:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:22.879 21:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:23.138 21:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.138 21:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:23.138 21:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.398 21:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:23.399 21:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.658 21:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:23.658 21:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:23.931 21:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.931 21:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:23.931 21:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.191 21:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:24.191 21:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.450 21:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.450 [2024-07-24 21:58:03.629603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.450 21:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:24.710 21:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:24.969 21:58:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:26.347 21:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:26.347 21:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:26.347 21:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.347 21:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:26.347 21:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:26.347 21:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:28.253 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:28.253 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:28.253 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.253 21:58:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:28.253 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.253 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:28.253 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:28.253 [global] 00:10:28.254 thread=1 00:10:28.254 invalidate=1 00:10:28.254 rw=write 00:10:28.254 time_based=1 00:10:28.254 runtime=1 00:10:28.254 ioengine=libaio 00:10:28.254 direct=1 00:10:28.254 bs=4096 00:10:28.254 iodepth=1 00:10:28.254 norandommap=0 00:10:28.254 numjobs=1 00:10:28.254 00:10:28.254 verify_dump=1 00:10:28.254 verify_backlog=512 00:10:28.254 verify_state_save=0 00:10:28.254 do_verify=1 00:10:28.254 verify=crc32c-intel 00:10:28.254 [job0] 00:10:28.254 filename=/dev/nvme0n1 00:10:28.254 [job1] 00:10:28.254 filename=/dev/nvme0n2 00:10:28.254 [job2] 00:10:28.254 filename=/dev/nvme0n3 00:10:28.254 [job3] 00:10:28.254 filename=/dev/nvme0n4 00:10:28.538 Could not set queue depth (nvme0n1) 00:10:28.538 Could not set queue depth (nvme0n2) 00:10:28.538 Could not set queue depth (nvme0n3) 00:10:28.538 Could not set queue depth (nvme0n4) 00:10:28.803 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.803 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.803 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.803 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.803 fio-3.35 00:10:28.803 Starting 4 threads 00:10:30.207 00:10:30.207 job0: (groupid=0, jobs=1): err= 0: pid=2587976: Wed Jul 24 21:58:09 2024 00:10:30.207 read: IOPS=425, BW=1702KiB/s (1743kB/s)(1704KiB/1001msec) 00:10:30.207 slat (nsec): min=8670, max=23982, avg=9778.77, stdev=2404.62 00:10:30.207 clat (usec): min=320, max=41672, avg=2018.11, stdev=7980.35 00:10:30.207 lat (usec): min=330, max=41681, avg=2027.89, stdev=7981.20 00:10:30.207 clat percentiles (usec): 00:10:30.207 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 355], 00:10:30.207 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:10:30.207 | 70.00th=[ 396], 80.00th=[ 429], 90.00th=[ 510], 95.00th=[ 660], 00:10:30.207 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:30.207 | 99.99th=[41681] 00:10:30.207 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:30.207 slat (nsec): min=11677, max=43256, avg=13135.93, stdev=2290.50 00:10:30.207 clat (usec): min=166, max=386, avg=247.38, stdev=42.39 00:10:30.207 lat (usec): min=179, max=429, avg=260.52, stdev=42.53 00:10:30.207 clat percentiles (usec): 00:10:30.207 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 198], 00:10:30.207 | 30.00th=[ 221], 40.00th=[ 243], 50.00th=[ 258], 60.00th=[ 277], 00:10:30.207 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 285], 95.00th=[ 302], 00:10:30.207 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 388], 99.95th=[ 388], 00:10:30.207 | 99.99th=[ 388] 00:10:30.207 bw ( KiB/s): min= 4096, max= 4096, per=30.78%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 
0.00, samples=1 00:10:30.207 lat (usec) : 250=24.73%, 500=69.62%, 750=3.73%, 1000=0.11% 00:10:30.207 lat (msec) : 50=1.81% 00:10:30.207 cpu : usr=0.80%, sys=1.70%, ctx=938, majf=0, minf=2 00:10:30.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.207 issued rwts: total=426,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.207 job1: (groupid=0, jobs=1): err= 0: pid=2587993: Wed Jul 24 21:58:09 2024 00:10:30.207 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:30.207 slat (nsec): min=8793, max=26476, avg=10010.35, stdev=2823.73 00:10:30.207 clat (usec): min=346, max=41188, avg=1378.03, stdev=6151.18 00:10:30.207 lat (usec): min=356, max=41201, avg=1388.04, stdev=6152.79 00:10:30.207 clat percentiles (usec): 00:10:30.207 | 1.00th=[ 355], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 371], 00:10:30.207 | 30.00th=[ 379], 40.00th=[ 383], 50.00th=[ 388], 60.00th=[ 396], 00:10:30.207 | 70.00th=[ 412], 80.00th=[ 469], 90.00th=[ 519], 95.00th=[ 545], 00:10:30.207 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:30.207 | 99.99th=[41157] 00:10:30.207 write: IOPS=809, BW=3237KiB/s (3314kB/s)(3240KiB/1001msec); 0 zone resets 00:10:30.207 slat (usec): min=11, max=40627, avg=112.12, stdev=1986.70 00:10:30.207 clat (usec): min=175, max=457, avg=238.99, stdev=39.75 00:10:30.207 lat (usec): min=189, max=40973, avg=351.12, stdev=1993.40 00:10:30.207 clat percentiles (usec): 00:10:30.207 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:10:30.207 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 239], 00:10:30.207 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 289], 95.00th=[ 326], 00:10:30.207 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 457], 99.95th=[ 457], 00:10:30.207 | 99.99th=[ 457] 00:10:30.207 bw ( KiB/s): min= 4096, max= 4096, per=30.78%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.207 lat (usec) : 250=44.93%, 500=49.70%, 750=4.39% 00:10:30.207 lat (msec) : 10=0.08%, 50=0.91% 00:10:30.207 cpu : usr=1.20%, sys=2.10%, ctx=1327, majf=0, minf=1 00:10:30.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.207 issued rwts: total=512,810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.207 job2: (groupid=0, jobs=1): err= 0: pid=2588017: Wed Jul 24 21:58:09 2024 00:10:30.207 read: IOPS=1058, BW=4233KiB/s (4335kB/s)(4288KiB/1013msec) 00:10:30.208 slat (nsec): min=8697, max=30915, avg=9563.21, stdev=1633.23 00:10:30.208 clat (usec): min=248, max=41190, avg=517.40, stdev=2482.37 00:10:30.208 lat (usec): min=257, max=41200, avg=526.96, stdev=2482.39 00:10:30.208 clat percentiles (usec): 00:10:30.208 | 1.00th=[ 265], 5.00th=[ 322], 10.00th=[ 347], 20.00th=[ 355], 00:10:30.208 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 371], 00:10:30.208 | 70.00th=[ 375], 80.00th=[ 379], 90.00th=[ 392], 95.00th=[ 400], 00:10:30.208 | 99.00th=[ 537], 99.50th=[ 644], 99.90th=[41157], 99.95th=[41157], 00:10:30.208 | 99.99th=[41157] 
00:10:30.208 write: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec); 0 zone resets 00:10:30.208 slat (usec): min=9, max=40436, avg=64.93, stdev=1420.02 00:10:30.208 clat (usec): min=168, max=476, avg=221.89, stdev=32.13 00:10:30.208 lat (usec): min=181, max=40877, avg=286.82, stdev=1428.91 00:10:30.208 clat percentiles (usec): 00:10:30.208 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:10:30.208 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 223], 00:10:30.208 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 260], 95.00th=[ 285], 00:10:30.208 | 99.00th=[ 326], 99.50th=[ 371], 99.90th=[ 441], 99.95th=[ 478], 00:10:30.208 | 99.99th=[ 478] 00:10:30.208 bw ( KiB/s): min= 4784, max= 7504, per=46.17%, avg=6144.00, stdev=1923.33, samples=2 00:10:30.208 iops : min= 1196, max= 1876, avg=1536.00, stdev=480.83, samples=2 00:10:30.208 lat (usec) : 250=50.92%, 500=48.31%, 750=0.61% 00:10:30.208 lat (msec) : 50=0.15% 00:10:30.208 cpu : usr=1.78%, sys=3.16%, ctx=2611, majf=0, minf=1 00:10:30.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.208 issued rwts: total=1072,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.208 job3: (groupid=0, jobs=1): err= 0: pid=2588025: Wed Jul 24 21:58:09 2024 00:10:30.208 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:10:30.208 slat (nsec): min=11817, max=26216, avg=24528.67, stdev=2940.39 00:10:30.208 clat (usec): min=40668, max=41068, avg=40954.66, stdev=80.99 00:10:30.208 lat (usec): min=40680, max=41093, avg=40979.19, stdev=83.37 00:10:30.208 clat percentiles (usec): 00:10:30.208 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:30.208 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:30.208 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:30.208 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:30.208 | 99.99th=[41157] 00:10:30.208 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:30.208 slat (usec): min=7, max=12504, avg=36.95, stdev=552.33 00:10:30.208 clat (usec): min=186, max=455, avg=237.87, stdev=28.28 00:10:30.208 lat (usec): min=194, max=12890, avg=274.82, stdev=559.66 00:10:30.208 clat percentiles (usec): 00:10:30.208 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:10:30.208 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:10:30.208 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 281], 00:10:30.208 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 457], 99.95th=[ 457], 00:10:30.208 | 99.99th=[ 457] 00:10:30.208 bw ( KiB/s): min= 4096, max= 4096, per=30.78%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.208 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.208 lat (usec) : 250=67.73%, 500=28.33% 00:10:30.208 lat (msec) : 50=3.94% 00:10:30.208 cpu : usr=1.00%, sys=0.30%, ctx=536, majf=0, minf=1 00:10:30.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.208 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.208 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:10:30.208 00:10:30.208 Run status group 0 (all jobs): 00:10:30.208 READ: bw=8020KiB/s (8212kB/s), 83.7KiB/s-4233KiB/s (85.7kB/s-4335kB/s), io=8124KiB (8319kB), run=1001-1013msec 00:10:30.208 WRITE: bw=13.0MiB/s (13.6MB/s), 2040KiB/s-6065KiB/s (2089kB/s-6211kB/s), io=13.2MiB (13.8MB), run=1001-1013msec 00:10:30.208 00:10:30.208 Disk stats (read/write): 00:10:30.208 nvme0n1: ios=391/512, merge=0/0, ticks=704/115, in_queue=819, util=84.67% 00:10:30.208 nvme0n2: ios=258/512, merge=0/0, ticks=1500/119, in_queue=1619, util=90.67% 00:10:30.208 nvme0n3: ios=997/1024, merge=0/0, ticks=1377/222, in_queue=1599, util=94.96% 00:10:30.208 nvme0n4: ios=70/512, merge=0/0, ticks=1152/114, in_queue=1266, util=95.43% 00:10:30.208 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:30.208 [global] 00:10:30.208 thread=1 00:10:30.208 invalidate=1 00:10:30.208 rw=randwrite 00:10:30.208 time_based=1 00:10:30.208 runtime=1 00:10:30.208 ioengine=libaio 00:10:30.208 direct=1 00:10:30.208 bs=4096 00:10:30.208 iodepth=1 00:10:30.208 norandommap=0 00:10:30.208 numjobs=1 00:10:30.208 00:10:30.208 verify_dump=1 00:10:30.208 verify_backlog=512 00:10:30.208 verify_state_save=0 00:10:30.208 do_verify=1 00:10:30.208 verify=crc32c-intel 00:10:30.208 [job0] 00:10:30.208 filename=/dev/nvme0n1 00:10:30.208 [job1] 00:10:30.208 filename=/dev/nvme0n2 00:10:30.208 [job2] 00:10:30.208 filename=/dev/nvme0n3 00:10:30.208 [job3] 00:10:30.208 filename=/dev/nvme0n4 00:10:30.208 Could not set queue depth (nvme0n1) 00:10:30.208 Could not set queue depth (nvme0n2) 00:10:30.208 Could not set queue depth (nvme0n3) 00:10:30.208 Could not set queue depth (nvme0n4) 00:10:30.478 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.478 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.478 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.478 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.478 fio-3.35 00:10:30.478 Starting 4 threads 00:10:31.876 00:10:31.876 job0: (groupid=0, jobs=1): err= 0: pid=2588418: Wed Jul 24 21:58:10 2024 00:10:31.876 read: IOPS=541, BW=2165KiB/s (2217kB/s)(2252KiB/1040msec) 00:10:31.876 slat (nsec): min=8914, max=42540, avg=11315.97, stdev=3257.70 00:10:31.876 clat (usec): min=266, max=41109, avg=1404.75, stdev=6322.88 00:10:31.876 lat (usec): min=277, max=41134, avg=1416.06, stdev=6325.38 00:10:31.876 clat percentiles (usec): 00:10:31.876 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 318], 00:10:31.876 | 30.00th=[ 334], 40.00th=[ 363], 50.00th=[ 388], 60.00th=[ 416], 00:10:31.876 | 70.00th=[ 445], 80.00th=[ 478], 90.00th=[ 529], 95.00th=[ 562], 00:10:31.876 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:31.876 | 99.99th=[41157] 00:10:31.876 write: IOPS=984, BW=3938KiB/s (4033kB/s)(4096KiB/1040msec); 0 zone resets 00:10:31.876 slat (nsec): min=12042, max=40663, avg=14367.75, stdev=2061.84 00:10:31.876 clat (usec): min=154, max=647, avg=215.07, stdev=35.74 00:10:31.876 lat (usec): min=167, max=662, avg=229.44, stdev=35.86 00:10:31.876 clat percentiles (usec): 00:10:31.876 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 
190], 00:10:31.876 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 217], 00:10:31.876 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 269], 00:10:31.876 | 99.00th=[ 314], 99.50th=[ 351], 99.90th=[ 586], 99.95th=[ 652], 00:10:31.876 | 99.99th=[ 652] 00:10:31.876 bw ( KiB/s): min= 8192, max= 8192, per=52.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:31.876 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:31.876 lat (usec) : 250=57.21%, 500=37.37%, 750=4.54% 00:10:31.876 lat (msec) : 50=0.88% 00:10:31.876 cpu : usr=2.41%, sys=2.12%, ctx=1589, majf=0, minf=1 00:10:31.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.876 issued rwts: total=563,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.876 job1: (groupid=0, jobs=1): err= 0: pid=2588434: Wed Jul 24 21:58:10 2024 00:10:31.876 read: IOPS=621, BW=2486KiB/s (2546kB/s)(2556KiB/1028msec) 00:10:31.876 slat (nsec): min=8204, max=38962, avg=11353.59, stdev=2785.30 00:10:31.876 clat (usec): min=230, max=41125, avg=1139.17, stdev=5514.79 00:10:31.876 lat (usec): min=250, max=41153, avg=1150.53, stdev=5517.03 00:10:31.876 clat percentiles (usec): 00:10:31.876 | 1.00th=[ 253], 5.00th=[ 314], 10.00th=[ 343], 20.00th=[ 359], 00:10:31.876 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:10:31.876 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 433], 95.00th=[ 461], 00:10:31.876 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:31.876 | 99.99th=[41157] 00:10:31.876 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:10:31.876 slat (nsec): min=10974, max=49348, avg=14836.22, stdev=3932.82 00:10:31.876 clat (usec): min=126, max=819, avg=263.08, stdev=60.58 00:10:31.876 lat (usec): min=139, max=832, avg=277.92, stdev=61.18 00:10:31.876 clat percentiles (usec): 00:10:31.876 | 1.00th=[ 149], 5.00th=[ 190], 10.00th=[ 208], 20.00th=[ 223], 00:10:31.876 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 265], 00:10:31.876 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 351], 95.00th=[ 371], 00:10:31.876 | 99.00th=[ 412], 99.50th=[ 474], 99.90th=[ 676], 99.95th=[ 824], 00:10:31.876 | 99.99th=[ 824] 00:10:31.876 bw ( KiB/s): min= 8192, max= 8192, per=52.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:31.876 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:31.876 lat (usec) : 250=32.47%, 500=66.21%, 750=0.54%, 1000=0.06% 00:10:31.876 lat (msec) : 50=0.72% 00:10:31.876 cpu : usr=1.56%, sys=2.92%, ctx=1664, majf=0, minf=1 00:10:31.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.876 issued rwts: total=639,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.876 job2: (groupid=0, jobs=1): err= 0: pid=2588453: Wed Jul 24 21:58:10 2024 00:10:31.876 read: IOPS=332, BW=1329KiB/s (1361kB/s)(1368KiB/1029msec) 00:10:31.876 slat (usec): min=7, max=210, avg=11.16, stdev=11.49 00:10:31.876 clat (usec): min=434, max=42868, avg=2579.99, stdev=8819.00 00:10:31.876 lat (usec): min=444, max=42895, 
avg=2591.14, stdev=8823.34 00:10:31.876 clat percentiles (usec): 00:10:31.876 | 1.00th=[ 461], 5.00th=[ 498], 10.00th=[ 515], 20.00th=[ 537], 00:10:31.876 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 578], 00:10:31.876 | 70.00th=[ 586], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 840], 00:10:31.876 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:10:31.876 | 99.99th=[42730] 00:10:31.876 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:31.876 slat (nsec): min=10838, max=41301, avg=13042.64, stdev=2420.12 00:10:31.876 clat (usec): min=197, max=482, avg=259.84, stdev=36.02 00:10:31.876 lat (usec): min=210, max=502, avg=272.89, stdev=36.68 00:10:31.876 clat percentiles (usec): 00:10:31.876 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:10:31.876 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 265], 00:10:31.876 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 322], 00:10:31.876 | 99.00th=[ 388], 99.50th=[ 420], 99.90th=[ 482], 99.95th=[ 482], 00:10:31.876 | 99.99th=[ 482] 00:10:31.876 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.876 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.876 lat (usec) : 250=27.63%, 500=34.66%, 750=35.60%, 1000=0.12% 00:10:31.876 lat (msec) : 50=1.99% 00:10:31.876 cpu : usr=0.58%, sys=0.97%, ctx=854, majf=0, minf=1 00:10:31.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.876 issued rwts: total=342,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.876 job3: (groupid=0, jobs=1): err= 0: pid=2588454: Wed Jul 24 21:58:10 2024 00:10:31.876 read: IOPS=1324, BW=5299KiB/s (5426kB/s)(5304KiB/1001msec) 00:10:31.876 slat (nsec): min=8881, max=27347, avg=9519.50, stdev=1012.58 00:10:31.876 clat (usec): min=270, max=1000, avg=468.60, stdev=70.16 00:10:31.876 lat (usec): min=279, max=1010, avg=478.12, stdev=70.15 00:10:31.876 clat percentiles (usec): 00:10:31.876 | 1.00th=[ 302], 5.00th=[ 351], 10.00th=[ 412], 20.00th=[ 429], 00:10:31.876 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 461], 60.00th=[ 474], 00:10:31.876 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 570], 95.00th=[ 603], 00:10:31.876 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 857], 99.95th=[ 1004], 00:10:31.876 | 99.99th=[ 1004] 00:10:31.876 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:31.876 slat (nsec): min=7620, max=48170, avg=12552.09, stdev=2284.23 00:10:31.876 clat (usec): min=140, max=945, avg=220.35, stdev=47.15 00:10:31.876 lat (usec): min=152, max=958, avg=232.90, stdev=47.41 00:10:31.876 clat percentiles (usec): 00:10:31.876 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 180], 00:10:31.876 | 30.00th=[ 194], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 225], 00:10:31.876 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 281], 95.00th=[ 293], 00:10:31.876 | 99.00th=[ 343], 99.50th=[ 388], 99.90th=[ 603], 99.95th=[ 947], 00:10:31.876 | 99.99th=[ 947] 00:10:31.876 bw ( KiB/s): min= 7072, max= 7072, per=44.89%, avg=7072.00, stdev= 0.00, samples=1 00:10:31.876 iops : min= 1768, max= 1768, avg=1768.00, stdev= 0.00, samples=1 00:10:31.876 lat (usec) : 250=43.01%, 500=45.04%, 750=11.81%, 1000=0.10% 00:10:31.876 lat (msec) : 
2=0.03% 00:10:31.876 cpu : usr=2.50%, sys=2.80%, ctx=2862, majf=0, minf=2 00:10:31.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.876 issued rwts: total=1326,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.876 00:10:31.876 Run status group 0 (all jobs): 00:10:31.877 READ: bw=10.8MiB/s (11.3MB/s), 1329KiB/s-5299KiB/s (1361kB/s-5426kB/s), io=11.2MiB (11.8MB), run=1001-1040msec 00:10:31.877 WRITE: bw=15.4MiB/s (16.1MB/s), 1990KiB/s-6138KiB/s (2038kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1040msec 00:10:31.877 00:10:31.877 Disk stats (read/write): 00:10:31.877 nvme0n1: ios=584/1024, merge=0/0, ticks=1227/213, in_queue=1440, util=97.39% 00:10:31.877 nvme0n2: ios=683/1024, merge=0/0, ticks=1084/264, in_queue=1348, util=92.94% 00:10:31.877 nvme0n3: ios=393/512, merge=0/0, ticks=723/128, in_queue=851, util=90.31% 00:10:31.877 nvme0n4: ios=1081/1343, merge=0/0, ticks=568/297, in_queue=865, util=92.76% 00:10:31.877 21:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:31.877 [global] 00:10:31.877 thread=1 00:10:31.877 invalidate=1 00:10:31.877 rw=write 00:10:31.877 time_based=1 00:10:31.877 runtime=1 00:10:31.877 ioengine=libaio 00:10:31.877 direct=1 00:10:31.877 bs=4096 00:10:31.877 iodepth=128 00:10:31.877 norandommap=0 00:10:31.877 numjobs=1 00:10:31.877 00:10:31.877 verify_dump=1 00:10:31.877 verify_backlog=512 00:10:31.877 verify_state_save=0 00:10:31.877 do_verify=1 00:10:31.877 verify=crc32c-intel 00:10:31.877 [job0] 00:10:31.877 filename=/dev/nvme0n1 00:10:31.877 [job1] 00:10:31.877 filename=/dev/nvme0n2 00:10:31.877 [job2] 00:10:31.877 filename=/dev/nvme0n3 00:10:31.877 [job3] 00:10:31.877 filename=/dev/nvme0n4 00:10:31.877 Could not set queue depth (nvme0n1) 00:10:31.877 Could not set queue depth (nvme0n2) 00:10:31.877 Could not set queue depth (nvme0n3) 00:10:31.877 Could not set queue depth (nvme0n4) 00:10:32.146 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.146 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.146 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.146 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.146 fio-3.35 00:10:32.146 Starting 4 threads 00:10:33.547 00:10:33.548 job0: (groupid=0, jobs=1): err= 0: pid=2588871: Wed Jul 24 21:58:12 2024 00:10:33.548 read: IOPS=4989, BW=19.5MiB/s (20.4MB/s)(19.7MiB/1009msec) 00:10:33.548 slat (nsec): min=1721, max=36243k, avg=97929.96, stdev=898096.21 00:10:33.548 clat (usec): min=4688, max=83558, avg=15523.46, stdev=12277.69 00:10:33.548 lat (usec): min=4692, max=83584, avg=15621.39, stdev=12336.58 00:10:33.548 clat percentiles (usec): 00:10:33.548 | 1.00th=[ 5211], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9241], 00:10:33.548 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11338], 60.00th=[12518], 00:10:33.548 | 70.00th=[14484], 80.00th=[17695], 90.00th=[25822], 95.00th=[40633], 00:10:33.548 | 99.00th=[68682], 99.50th=[80217], 99.90th=[80217], 
99.95th=[80217], 00:10:33.548 | 99.99th=[83362] 00:10:33.548 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:10:33.548 slat (usec): min=2, max=21962, avg=72.94, stdev=648.80 00:10:33.548 clat (usec): min=1421, max=62752, avg=9758.54, stdev=4959.86 00:10:33.548 lat (usec): min=1448, max=63299, avg=9831.48, stdev=5022.35 00:10:33.548 clat percentiles (usec): 00:10:33.548 | 1.00th=[ 3359], 5.00th=[ 5276], 10.00th=[ 5997], 20.00th=[ 6915], 00:10:33.548 | 30.00th=[ 7635], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9503], 00:10:33.548 | 70.00th=[10290], 80.00th=[11469], 90.00th=[13829], 95.00th=[16188], 00:10:33.548 | 99.00th=[34866], 99.50th=[35914], 99.90th=[51643], 99.95th=[51643], 00:10:33.548 | 99.99th=[62653] 00:10:33.548 bw ( KiB/s): min=16432, max=24528, per=29.85%, avg=20480.00, stdev=5724.74, samples=2 00:10:33.548 iops : min= 4108, max= 6132, avg=5120.00, stdev=1431.18, samples=2 00:10:33.548 lat (msec) : 2=0.08%, 4=1.05%, 10=48.71%, 20=41.67%, 50=6.39% 00:10:33.548 lat (msec) : 100=2.10% 00:10:33.548 cpu : usr=4.96%, sys=9.33%, ctx=275, majf=0, minf=1 00:10:33.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:33.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.548 issued rwts: total=5034,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.548 job1: (groupid=0, jobs=1): err= 0: pid=2588879: Wed Jul 24 21:58:12 2024 00:10:33.548 read: IOPS=3148, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1011msec) 00:10:33.548 slat (nsec): min=1867, max=18803k, avg=134301.64, stdev=1027514.97 00:10:33.548 clat (usec): min=1465, max=66444, avg=18091.09, stdev=9149.41 00:10:33.548 lat (usec): min=1488, max=66455, avg=18225.39, stdev=9236.38 00:10:33.548 clat percentiles (usec): 00:10:33.548 | 1.00th=[ 1942], 5.00th=[ 3490], 10.00th=[ 8029], 20.00th=[12518], 00:10:33.548 | 30.00th=[13960], 40.00th=[14746], 50.00th=[16581], 60.00th=[18744], 00:10:33.548 | 70.00th=[20055], 80.00th=[24773], 90.00th=[28705], 95.00th=[34341], 00:10:33.548 | 99.00th=[44827], 99.50th=[53740], 99.90th=[66323], 99.95th=[66323], 00:10:33.548 | 99.99th=[66323] 00:10:33.548 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:10:33.548 slat (usec): min=2, max=10926, avg=137.72, stdev=788.86 00:10:33.548 clat (usec): min=433, max=78720, avg=19705.30, stdev=18070.48 00:10:33.548 lat (usec): min=568, max=78732, avg=19843.02, stdev=18172.10 00:10:33.548 clat percentiles (usec): 00:10:33.548 | 1.00th=[ 1713], 5.00th=[ 2966], 10.00th=[ 5932], 20.00th=[ 8586], 00:10:33.548 | 30.00th=[10159], 40.00th=[11338], 50.00th=[12911], 60.00th=[14877], 00:10:33.548 | 70.00th=[18220], 80.00th=[24511], 90.00th=[50070], 95.00th=[69731], 00:10:33.548 | 99.00th=[73925], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:10:33.548 | 99.99th=[79168] 00:10:33.548 bw ( KiB/s): min=12152, max=16384, per=20.80%, avg=14268.00, stdev=2992.48, samples=2 00:10:33.548 iops : min= 3038, max= 4096, avg=3567.00, stdev=748.12, samples=2 00:10:33.548 lat (usec) : 500=0.01%, 1000=0.06% 00:10:33.548 lat (msec) : 2=1.79%, 4=4.70%, 10=13.51%, 20=53.97%, 50=20.35% 00:10:33.548 lat (msec) : 100=5.62% 00:10:33.548 cpu : usr=2.48%, sys=5.84%, ctx=345, majf=0, minf=1 00:10:33.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:33.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.548 issued rwts: total=3183,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.548 job2: (groupid=0, jobs=1): err= 0: pid=2588881: Wed Jul 24 21:58:12 2024 00:10:33.548 read: IOPS=3079, BW=12.0MiB/s (12.6MB/s)(12.2MiB/1015msec) 00:10:33.548 slat (nsec): min=1863, max=19917k, avg=131062.68, stdev=917379.40 00:10:33.548 clat (usec): min=4625, max=44894, avg=16584.96, stdev=6550.22 00:10:33.548 lat (usec): min=4636, max=46540, avg=16716.03, stdev=6597.47 00:10:33.548 clat percentiles (usec): 00:10:33.548 | 1.00th=[ 5538], 5.00th=[ 8291], 10.00th=[10159], 20.00th=[11863], 00:10:33.548 | 30.00th=[12780], 40.00th=[13960], 50.00th=[14877], 60.00th=[16319], 00:10:33.548 | 70.00th=[18482], 80.00th=[21103], 90.00th=[27132], 95.00th=[29230], 00:10:33.548 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[44827], 00:10:33.548 | 99.99th=[44827] 00:10:33.548 write: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec); 0 zone resets 00:10:33.548 slat (usec): min=2, max=44123, avg=155.17, stdev=1125.93 00:10:33.548 clat (usec): min=1600, max=78574, avg=19890.69, stdev=16587.97 00:10:33.548 lat (usec): min=1619, max=78589, avg=20045.86, stdev=16704.38 00:10:33.548 clat percentiles (usec): 00:10:33.548 | 1.00th=[ 2737], 5.00th=[ 6783], 10.00th=[ 8586], 20.00th=[10814], 00:10:33.548 | 30.00th=[11994], 40.00th=[13304], 50.00th=[14746], 60.00th=[15533], 00:10:33.548 | 70.00th=[17957], 80.00th=[22414], 90.00th=[43254], 95.00th=[69731], 00:10:33.548 | 99.00th=[74974], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:10:33.548 | 99.99th=[78119] 00:10:33.548 bw ( KiB/s): min=12208, max=15872, per=20.47%, avg=14040.00, stdev=2590.84, samples=2 00:10:33.548 iops : min= 3052, max= 3968, avg=3510.00, stdev=647.71, samples=2 00:10:33.548 lat (msec) : 2=0.13%, 4=0.55%, 10=11.91%, 20=63.41%, 50=19.12% 00:10:33.548 lat (msec) : 100=4.87% 00:10:33.548 cpu : usr=3.25%, sys=5.72%, ctx=306, majf=0, minf=1 00:10:33.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:33.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.548 issued rwts: total=3126,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.548 job3: (groupid=0, jobs=1): err= 0: pid=2588882: Wed Jul 24 21:58:12 2024 00:10:33.548 read: IOPS=5017, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1006msec) 00:10:33.548 slat (nsec): min=1820, max=14202k, avg=99921.40, stdev=730778.49 00:10:33.548 clat (usec): min=4303, max=43877, avg=13354.38, stdev=5212.00 00:10:33.548 lat (usec): min=4479, max=43881, avg=13454.30, stdev=5252.80 00:10:33.548 clat percentiles (usec): 00:10:33.548 | 1.00th=[ 5473], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[ 9634], 00:10:33.548 | 30.00th=[10159], 40.00th=[11207], 50.00th=[11731], 60.00th=[12518], 00:10:33.548 | 70.00th=[13960], 80.00th=[16909], 90.00th=[20841], 95.00th=[23987], 00:10:33.548 | 99.00th=[31851], 99.50th=[32113], 99.90th=[35390], 99.95th=[35390], 00:10:33.548 | 99.99th=[43779] 00:10:33.548 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:10:33.548 slat (usec): min=2, max=16894, avg=89.09, stdev=617.10 00:10:33.548 clat (usec): min=822, max=45274, avg=11747.79, stdev=5558.09 00:10:33.548 lat 
(usec): min=1506, max=50960, avg=11836.88, stdev=5601.01 00:10:33.548 clat percentiles (usec): 00:10:33.548 | 1.00th=[ 4817], 5.00th=[ 6652], 10.00th=[ 7242], 20.00th=[ 7898], 00:10:33.548 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10683], 60.00th=[11600], 00:10:33.548 | 70.00th=[12649], 80.00th=[13960], 90.00th=[16909], 95.00th=[22676], 00:10:33.548 | 99.00th=[35390], 99.50th=[37487], 99.90th=[45351], 99.95th=[45351], 00:10:33.548 | 99.99th=[45351] 00:10:33.548 bw ( KiB/s): min=16968, max=23992, per=29.85%, avg=20480.00, stdev=4966.72, samples=2 00:10:33.548 iops : min= 4242, max= 5998, avg=5120.00, stdev=1241.68, samples=2 00:10:33.548 lat (usec) : 1000=0.01% 00:10:33.548 lat (msec) : 2=0.02%, 4=0.24%, 10=36.16%, 20=54.95%, 50=8.63% 00:10:33.548 cpu : usr=3.88%, sys=7.16%, ctx=391, majf=0, minf=1 00:10:33.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:33.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.548 issued rwts: total=5048,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.548 00:10:33.548 Run status group 0 (all jobs): 00:10:33.548 READ: bw=63.1MiB/s (66.1MB/s), 12.0MiB/s-19.6MiB/s (12.6MB/s-20.6MB/s), io=64.0MiB (67.1MB), run=1006-1015msec 00:10:33.548 WRITE: bw=67.0MiB/s (70.2MB/s), 13.8MiB/s-19.9MiB/s (14.5MB/s-20.8MB/s), io=68.0MiB (71.3MB), run=1006-1015msec 00:10:33.548 00:10:33.548 Disk stats (read/write): 00:10:33.548 nvme0n1: ios=3617/4096, merge=0/0, ticks=44356/37321, in_queue=81677, util=86.47% 00:10:33.548 nvme0n2: ios=3123/3239, merge=0/0, ticks=50363/41258, in_queue=91621, util=97.23% 00:10:33.548 nvme0n3: ios=3092/3183, merge=0/0, ticks=32834/30166, in_queue=63000, util=99.04% 00:10:33.548 nvme0n4: ios=3967/4096, merge=0/0, ticks=28141/25614, in_queue=53755, util=97.08% 00:10:33.548 21:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:33.548 [global] 00:10:33.548 thread=1 00:10:33.548 invalidate=1 00:10:33.548 rw=randwrite 00:10:33.548 time_based=1 00:10:33.548 runtime=1 00:10:33.548 ioengine=libaio 00:10:33.548 direct=1 00:10:33.548 bs=4096 00:10:33.548 iodepth=128 00:10:33.548 norandommap=0 00:10:33.548 numjobs=1 00:10:33.548 00:10:33.548 verify_dump=1 00:10:33.548 verify_backlog=512 00:10:33.548 verify_state_save=0 00:10:33.548 do_verify=1 00:10:33.548 verify=crc32c-intel 00:10:33.548 [job0] 00:10:33.548 filename=/dev/nvme0n1 00:10:33.548 [job1] 00:10:33.548 filename=/dev/nvme0n2 00:10:33.548 [job2] 00:10:33.548 filename=/dev/nvme0n3 00:10:33.548 [job3] 00:10:33.548 filename=/dev/nvme0n4 00:10:33.548 Could not set queue depth (nvme0n1) 00:10:33.549 Could not set queue depth (nvme0n2) 00:10:33.549 Could not set queue depth (nvme0n3) 00:10:33.549 Could not set queue depth (nvme0n4) 00:10:33.807 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.807 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.807 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.807 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.807 fio-3.35 
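For reference, the randwrite job file printed above can be reproduced by driving a connected namespace with fio directly. A minimal sketch using the same options (device path and option values taken from the job file above; the exact command line fio-wrapper builds may differ):

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=4096 --iodepth=128 --ioengine=libaio --direct=1 \
        --thread --time_based --runtime=1 --numjobs=1 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512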
00:10:33.807 Starting 4 threads 00:10:35.218 00:10:35.218 job0: (groupid=0, jobs=1): err= 0: pid=2589303: Wed Jul 24 21:58:14 2024 00:10:35.218 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:10:35.218 slat (nsec): min=1967, max=10995k, avg=114336.15, stdev=738726.28 00:10:35.218 clat (usec): min=6273, max=38107, avg=13520.79, stdev=4427.76 00:10:35.218 lat (usec): min=6282, max=38111, avg=13635.12, stdev=4485.75 00:10:35.218 clat percentiles (usec): 00:10:35.218 | 1.00th=[ 7177], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945], 00:10:35.218 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:10:35.218 | 70.00th=[13304], 80.00th=[15139], 90.00th=[19792], 95.00th=[24511], 00:10:35.218 | 99.00th=[29754], 99.50th=[34341], 99.90th=[38011], 99.95th=[38011], 00:10:35.218 | 99.99th=[38011] 00:10:35.218 write: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1010msec); 0 zone resets 00:10:35.218 slat (usec): min=2, max=8058, avg=118.16, stdev=528.49 00:10:35.218 clat (usec): min=1828, max=38106, avg=17284.05, stdev=6687.24 00:10:35.218 lat (usec): min=1844, max=38110, avg=17402.21, stdev=6726.95 00:10:35.218 clat percentiles (usec): 00:10:35.218 | 1.00th=[ 5604], 5.00th=[ 6718], 10.00th=[ 8848], 20.00th=[10552], 00:10:35.218 | 30.00th=[11863], 40.00th=[15664], 50.00th=[19268], 60.00th=[19530], 00:10:35.218 | 70.00th=[19792], 80.00th=[22152], 90.00th=[26870], 95.00th=[28967], 00:10:35.218 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[29754], 00:10:35.218 | 99.99th=[38011] 00:10:35.218 bw ( KiB/s): min=14416, max=18352, per=22.99%, avg=16384.00, stdev=2783.17, samples=2 00:10:35.218 iops : min= 3604, max= 4588, avg=4096.00, stdev=695.79, samples=2 00:10:35.218 lat (msec) : 2=0.04%, 10=11.31%, 20=70.28%, 50=18.37% 00:10:35.218 cpu : usr=5.45%, sys=5.25%, ctx=495, majf=0, minf=1 00:10:35.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:35.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.218 issued rwts: total=4096,4206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.218 job1: (groupid=0, jobs=1): err= 0: pid=2589304: Wed Jul 24 21:58:14 2024 00:10:35.218 read: IOPS=2356, BW=9426KiB/s (9653kB/s)(9860KiB/1046msec) 00:10:35.218 slat (usec): min=7, max=23965, avg=211.29, stdev=1451.63 00:10:35.218 clat (usec): min=12757, max=96828, avg=30984.99, stdev=19377.61 00:10:35.218 lat (usec): min=15660, max=96844, avg=31196.27, stdev=19419.91 00:10:35.218 clat percentiles (usec): 00:10:35.218 | 1.00th=[15270], 5.00th=[16909], 10.00th=[19006], 20.00th=[19268], 00:10:35.218 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:10:35.218 | 70.00th=[34866], 80.00th=[51119], 90.00th=[54789], 95.00th=[76022], 00:10:35.218 | 99.00th=[95945], 99.50th=[96994], 99.90th=[96994], 99.95th=[96994], 00:10:35.218 | 99.99th=[96994] 00:10:35.218 write: IOPS=2447, BW=9790KiB/s (10.0MB/s)(10.0MiB/1046msec); 0 zone resets 00:10:35.218 slat (usec): min=9, max=26395, avg=176.28, stdev=1163.32 00:10:35.218 clat (usec): min=10841, max=58938, avg=21245.38, stdev=11209.61 00:10:35.218 lat (usec): min=14060, max=58954, avg=21421.65, stdev=11273.03 00:10:35.218 clat percentiles (usec): 00:10:35.218 | 1.00th=[11469], 5.00th=[14091], 10.00th=[14222], 20.00th=[14484], 00:10:35.218 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[16319], 
00:10:35.218 | 70.00th=[18482], 80.00th=[31327], 90.00th=[39584], 95.00th=[46924], 00:10:35.218 | 99.00th=[52691], 99.50th=[52691], 99.90th=[58983], 99.95th=[58983], 00:10:35.218 | 99.99th=[58983] 00:10:35.218 bw ( KiB/s): min= 9224, max=11256, per=14.37%, avg=10240.00, stdev=1436.84, samples=2 00:10:35.218 iops : min= 2306, max= 2814, avg=2560.00, stdev=359.21, samples=2 00:10:35.218 lat (msec) : 20=64.22%, 50=24.36%, 100=11.42% 00:10:35.218 cpu : usr=3.54%, sys=5.45%, ctx=164, majf=0, minf=1 00:10:35.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:35.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.218 issued rwts: total=2465,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.218 job2: (groupid=0, jobs=1): err= 0: pid=2589305: Wed Jul 24 21:58:14 2024 00:10:35.218 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:10:35.218 slat (usec): min=2, max=9774, avg=92.52, stdev=679.26 00:10:35.218 clat (usec): min=2875, max=22405, avg=11684.62, stdev=2640.69 00:10:35.218 lat (usec): min=2885, max=26740, avg=11777.14, stdev=2682.57 00:10:35.218 clat percentiles (usec): 00:10:35.218 | 1.00th=[ 6063], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[ 9896], 00:10:35.218 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:10:35.218 | 70.00th=[11863], 80.00th=[13435], 90.00th=[15401], 95.00th=[17171], 00:10:35.218 | 99.00th=[19530], 99.50th=[20579], 99.90th=[21627], 99.95th=[22152], 00:10:35.218 | 99.99th=[22414] 00:10:35.218 write: IOPS=5940, BW=23.2MiB/s (24.3MB/s)(23.5MiB/1011msec); 0 zone resets 00:10:35.218 slat (usec): min=3, max=9195, avg=72.21, stdev=516.42 00:10:35.218 clat (usec): min=1735, max=22396, avg=10388.09, stdev=2764.19 00:10:35.218 lat (usec): min=1753, max=22400, avg=10460.31, stdev=2778.08 00:10:35.218 clat percentiles (usec): 00:10:35.218 | 1.00th=[ 2802], 5.00th=[ 5473], 10.00th=[ 6783], 20.00th=[ 8160], 00:10:35.218 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[10814], 60.00th=[11207], 00:10:35.218 | 70.00th=[11469], 80.00th=[11600], 90.00th=[14091], 95.00th=[14746], 00:10:35.218 | 99.00th=[17433], 99.50th=[19268], 99.90th=[21103], 99.95th=[21103], 00:10:35.218 | 99.99th=[22414] 00:10:35.218 bw ( KiB/s): min=22456, max=24576, per=33.00%, avg=23516.00, stdev=1499.07, samples=2 00:10:35.218 iops : min= 5614, max= 6144, avg=5879.00, stdev=374.77, samples=2 00:10:35.218 lat (msec) : 2=0.09%, 4=1.14%, 10=24.88%, 20=73.37%, 50=0.52% 00:10:35.218 cpu : usr=6.04%, sys=7.13%, ctx=494, majf=0, minf=1 00:10:35.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:35.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.218 issued rwts: total=5632,6006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.218 job3: (groupid=0, jobs=1): err= 0: pid=2589306: Wed Jul 24 21:58:14 2024 00:10:35.218 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:10:35.218 slat (nsec): min=1740, max=9871.5k, avg=91957.73, stdev=618705.21 00:10:35.218 clat (usec): min=5569, max=20894, avg=11923.71, stdev=2200.13 00:10:35.218 lat (usec): min=5577, max=20903, avg=12015.67, stdev=2242.85 00:10:35.218 clat percentiles (usec): 00:10:35.218 | 1.00th=[ 
7898], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10290], 00:10:35.218 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:10:35.218 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14877], 95.00th=[17171], 00:10:35.218 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20579], 99.95th=[20579], 00:10:35.218 | 99.99th=[20841] 00:10:35.218 write: IOPS=5815, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1008msec); 0 zone resets 00:10:35.218 slat (usec): min=2, max=7927, avg=74.64, stdev=402.41 00:10:35.218 clat (usec): min=1946, max=20588, avg=10300.05, stdev=2300.71 00:10:35.218 lat (usec): min=3257, max=20592, avg=10374.69, stdev=2311.79 00:10:35.218 clat percentiles (usec): 00:10:35.218 | 1.00th=[ 4080], 5.00th=[ 5342], 10.00th=[ 6718], 20.00th=[ 8586], 00:10:35.218 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11076], 60.00th=[11207], 00:10:35.218 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[13960], 00:10:35.218 | 99.00th=[15270], 99.50th=[16319], 99.90th=[20317], 99.95th=[20317], 00:10:35.218 | 99.99th=[20579] 00:10:35.218 bw ( KiB/s): min=21320, max=24560, per=32.19%, avg=22940.00, stdev=2291.03, samples=2 00:10:35.218 iops : min= 5330, max= 6140, avg=5735.00, stdev=572.76, samples=2 00:10:35.218 lat (msec) : 2=0.01%, 4=0.35%, 10=20.98%, 20=78.32%, 50=0.34% 00:10:35.218 cpu : usr=4.77%, sys=8.34%, ctx=572, majf=0, minf=1 00:10:35.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:35.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.219 issued rwts: total=5632,5862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.219 00:10:35.219 Run status group 0 (all jobs): 00:10:35.219 READ: bw=66.6MiB/s (69.8MB/s), 9426KiB/s-21.8MiB/s (9653kB/s-22.9MB/s), io=69.6MiB (73.0MB), run=1008-1046msec 00:10:35.219 WRITE: bw=69.6MiB/s (73.0MB/s), 9790KiB/s-23.2MiB/s (10.0MB/s-24.3MB/s), io=72.8MiB (76.3MB), run=1008-1046msec 00:10:35.219 00:10:35.219 Disk stats (read/write): 00:10:35.219 nvme0n1: ios=3241/3584, merge=0/0, ticks=41966/59569, in_queue=101535, util=84.98% 00:10:35.219 nvme0n2: ios=2053/2080, merge=0/0, ticks=14337/11192, in_queue=25529, util=85.22% 00:10:35.219 nvme0n3: ios=4664/4872, merge=0/0, ticks=52242/48864, in_queue=101106, util=90.49% 00:10:35.219 nvme0n4: ios=4658/4717, merge=0/0, ticks=45151/36210, in_queue=81361, util=92.75% 00:10:35.219 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:35.219 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2589423 00:10:35.219 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:35.219 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:35.219 [global] 00:10:35.219 thread=1 00:10:35.219 invalidate=1 00:10:35.219 rw=read 00:10:35.219 time_based=1 00:10:35.219 runtime=10 00:10:35.219 ioengine=libaio 00:10:35.219 direct=1 00:10:35.219 bs=4096 00:10:35.219 iodepth=1 00:10:35.219 norandommap=1 00:10:35.219 numjobs=1 00:10:35.219 00:10:35.219 [job0] 00:10:35.219 filename=/dev/nvme0n1 00:10:35.219 [job1] 00:10:35.219 filename=/dev/nvme0n2 00:10:35.219 [job2] 00:10:35.219 filename=/dev/nvme0n3 00:10:35.219 [job3] 00:10:35.219 filename=/dev/nvme0n4 00:10:35.219 Could not set queue depth 
(nvme0n1) 00:10:35.219 Could not set queue depth (nvme0n2) 00:10:35.219 Could not set queue depth (nvme0n3) 00:10:35.219 Could not set queue depth (nvme0n4) 00:10:35.479 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.479 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.479 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.479 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.479 fio-3.35 00:10:35.479 Starting 4 threads 00:10:38.005 21:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:38.262 21:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:38.262 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=12509184, buflen=4096 00:10:38.262 fio: pid=2589728, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:38.262 21:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.262 21:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:38.262 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=864256, buflen=4096 00:10:38.262 fio: pid=2589727, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:38.520 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=30695424, buflen=4096 00:10:38.520 fio: pid=2589724, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:38.520 21:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.520 21:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:38.779 21:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.779 21:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:38.779 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=327680, buflen=4096 00:10:38.779 fio: pid=2589726, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:38.779 00:10:38.779 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2589724: Wed Jul 24 21:58:17 2024 00:10:38.779 read: IOPS=2501, BW=9.77MiB/s (10.2MB/s)(29.3MiB/2996msec) 00:10:38.779 slat (usec): min=8, max=14664, avg=13.29, stdev=214.79 00:10:38.779 clat (usec): min=209, max=20782, avg=381.95, stdev=244.90 00:10:38.779 lat (usec): min=219, max=20792, avg=395.24, stdev=327.08 00:10:38.779 clat percentiles (usec): 00:10:38.779 | 1.00th=[ 255], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 322], 00:10:38.779 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 371], 00:10:38.779 | 70.00th=[ 383], 80.00th=[ 453], 90.00th=[ 486], 
95.00th=[ 502], 00:10:38.779 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 619], 99.95th=[ 693], 00:10:38.779 | 99.99th=[20841] 00:10:38.779 bw ( KiB/s): min= 8112, max=12272, per=75.33%, avg=10108.80, stdev=1797.07, samples=5 00:10:38.779 iops : min= 2028, max= 3068, avg=2527.20, stdev=449.27, samples=5 00:10:38.779 lat (usec) : 250=0.77%, 500=93.56%, 750=5.62%, 1000=0.03% 00:10:38.779 lat (msec) : 50=0.01% 00:10:38.779 cpu : usr=1.47%, sys=4.07%, ctx=7498, majf=0, minf=1 00:10:38.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.779 issued rwts: total=7495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.779 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2589726: Wed Jul 24 21:58:17 2024 00:10:38.779 read: IOPS=25, BW=99.0KiB/s (101kB/s)(320KiB/3231msec) 00:10:38.779 slat (usec): min=9, max=1647, avg=48.46, stdev=181.66 00:10:38.779 clat (usec): min=717, max=42040, avg=40065.47, stdev=6217.35 00:10:38.779 lat (usec): min=963, max=42957, avg=40114.22, stdev=6206.35 00:10:38.779 clat percentiles (usec): 00:10:38.779 | 1.00th=[ 717], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:38.779 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:38.779 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:38.779 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:38.779 | 99.99th=[42206] 00:10:38.779 bw ( KiB/s): min= 96, max= 104, per=0.73%, avg=98.83, stdev= 4.02, samples=6 00:10:38.779 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:10:38.779 lat (usec) : 750=1.23% 00:10:38.779 lat (msec) : 4=1.23%, 50=96.30% 00:10:38.779 cpu : usr=0.15%, sys=0.00%, ctx=84, majf=0, minf=1 00:10:38.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.779 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.779 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.779 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2589727: Wed Jul 24 21:58:17 2024 00:10:38.779 read: IOPS=74, BW=298KiB/s (305kB/s)(844KiB/2829msec) 00:10:38.779 slat (nsec): min=8777, max=38233, avg=14871.72, stdev=7503.06 00:10:38.779 clat (usec): min=349, max=42034, avg=13289.24, stdev=18966.04 00:10:38.779 lat (usec): min=359, max=42057, avg=13304.06, stdev=18973.11 00:10:38.779 clat percentiles (usec): 00:10:38.779 | 1.00th=[ 359], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 375], 00:10:38.779 | 30.00th=[ 379], 40.00th=[ 383], 50.00th=[ 388], 60.00th=[ 396], 00:10:38.779 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:38.779 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:38.779 | 99.99th=[42206] 00:10:38.779 bw ( KiB/s): min= 96, max= 1232, per=2.41%, avg=324.80, stdev=507.15, samples=5 00:10:38.779 iops : min= 24, max= 308, avg=81.20, stdev=126.79, samples=5 00:10:38.779 lat (usec) : 500=66.98%, 750=0.94% 00:10:38.779 lat (msec) : 50=31.60% 00:10:38.779 cpu : usr=0.00%, sys=0.25%, ctx=212, majf=0, 
minf=1 00:10:38.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.779 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.779 issued rwts: total=212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.779 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2589728: Wed Jul 24 21:58:17 2024 00:10:38.779 read: IOPS=1166, BW=4663KiB/s (4774kB/s)(11.9MiB/2620msec) 00:10:38.779 slat (nsec): min=6270, max=41200, avg=9638.17, stdev=1842.65 00:10:38.779 clat (usec): min=259, max=41947, avg=839.50, stdev=4271.05 00:10:38.779 lat (usec): min=269, max=41962, avg=849.14, stdev=4271.78 00:10:38.779 clat percentiles (usec): 00:10:38.779 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:10:38.779 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 449], 00:10:38.779 | 70.00th=[ 465], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 515], 00:10:38.779 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:38.779 | 99.99th=[42206] 00:10:38.779 bw ( KiB/s): min= 96, max=12280, per=36.37%, avg=4881.60, stdev=5126.65, samples=5 00:10:38.779 iops : min= 24, max= 3070, avg=1220.40, stdev=1281.66, samples=5 00:10:38.779 lat (usec) : 500=86.12%, 750=12.67%, 1000=0.03% 00:10:38.779 lat (msec) : 2=0.03%, 50=1.11% 00:10:38.779 cpu : usr=0.84%, sys=1.91%, ctx=3055, majf=0, minf=2 00:10:38.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.779 issued rwts: total=3055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.779 00:10:38.779 Run status group 0 (all jobs): 00:10:38.779 READ: bw=13.1MiB/s (13.7MB/s), 99.0KiB/s-9.77MiB/s (101kB/s-10.2MB/s), io=42.3MiB (44.4MB), run=2620-3231msec 00:10:38.779 00:10:38.779 Disk stats (read/write): 00:10:38.779 nvme0n1: ios=7194/0, merge=0/0, ticks=3714/0, in_queue=3714, util=99.13% 00:10:38.779 nvme0n2: ios=114/0, merge=0/0, ticks=3523/0, in_queue=3523, util=99.38% 00:10:38.779 nvme0n3: ios=212/0, merge=0/0, ticks=2813/0, in_queue=2813, util=96.01% 00:10:38.779 nvme0n4: ios=3053/0, merge=0/0, ticks=2479/0, in_queue=2479, util=96.45% 00:10:39.037 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.037 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:39.037 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.037 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:39.295 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.295 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc5 00:10:39.552 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.552 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2589423 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:39.810 nvmf hotplug test: fio failed as expected 00:10:39.810 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.068 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:40.068 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:40.068 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:40.068 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:40.068 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:40.068 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.068 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:40.068 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.069 rmmod nvme_tcp 00:10:40.069 rmmod nvme_fabrics 
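The "nvmf hotplug test: fio failed as expected" message above is the intended outcome: fio is left reading in the background while the backing bdevs are deleted out from under the subsystem, so the outstanding reads complete with Remote I/O errors (the err=121 entries in the job summaries above). A condensed sketch of the sequence recorded in this run (paths shortened; the actual fio.sh tracks the exit status in its own variables):

    # start a 10-second read workload against the connected namespaces
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # remove the backing bdevs while fio is still running
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done
    # fio exits non-zero once its files start returning Remote I/O error
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'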
00:10:40.069 rmmod nvme_keyring 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2586470 ']' 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2586470 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2586470 ']' 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2586470 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2586470 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2586470' 00:10:40.069 killing process with pid 2586470 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2586470 00:10:40.069 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2586470 00:10:40.327 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.327 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.327 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.327 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.327 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.327 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.327 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.327 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:42.857 00:10:42.857 real 0m28.273s 00:10:42.857 user 2m3.829s 00:10:42.857 sys 0m9.945s 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.857 ************************************ 00:10:42.857 END TEST nvmf_fio_target 00:10:42.857 ************************************ 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.857 ************************************ 00:10:42.857 START TEST nvmf_bdevio 00:10:42.857 ************************************ 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:42.857 * Looking for test storage... 00:10:42.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.857 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
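The NVME_HOSTNQN/NVME_HOSTID pair generated above populates the NVME_HOST array that tests attaching through the kernel initiator hand to nvme-cli, as the earlier fio target test did before its disconnect. A typical connect call built from the values in this run (illustrative only, not a line from this log):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        --hostid=006f0d1b-21c0-e711-906e-00163566263e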
00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.858 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
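bdevio.sh sizes its backing device with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 and then brings the target up through nvmftestinit. Once the transport exists, the device and subsystem are wired together with the usual RPC sequence; a sketch of that pattern using the values from this run (the add_ns/add_listener calls are the standard follow-up steps and are assumed here, not copied from this log):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420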
00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.974 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:50.975 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:50.975 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.975 
21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:50.975 Found net devices under 0000:af:00.0: cvl_0_0 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:50.975 Found net devices under 0000:af:00.1: cvl_0_1 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.975 21:58:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:50.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:10:50.975 00:10:50.975 --- 10.0.0.2 ping statistics --- 00:10:50.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.975 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:10:50.975 00:10:50.975 --- 10.0.0.1 ping statistics --- 00:10:50.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.975 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.975 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.975 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2594222 00:10:50.975 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:50.975 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2594222 00:10:50.975 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2594222 ']' 00:10:50.975 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.975 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.975 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.975 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.975 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.975 [2024-07-24 21:58:29.055238] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:10:50.975 [2024-07-24 21:58:29.055286] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.975 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.975 [2024-07-24 21:58:29.129123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.975 [2024-07-24 21:58:29.196568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.975 [2024-07-24 21:58:29.196611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.975 [2024-07-24 21:58:29.196620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.975 [2024-07-24 21:58:29.196628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.975 [2024-07-24 21:58:29.196651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.975 [2024-07-24 21:58:29.196773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:50.976 [2024-07-24 21:58:29.196865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:50.976 [2024-07-24 21:58:29.196953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.976 [2024-07-24 21:58:29.196955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.976 [2024-07-24 21:58:29.916135] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.976 Malloc0 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.976 [2024-07-24 21:58:29.962662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:50.976 { 00:10:50.976 "params": { 00:10:50.976 "name": "Nvme$subsystem", 00:10:50.976 "trtype": "$TEST_TRANSPORT", 00:10:50.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.976 "adrfam": "ipv4", 00:10:50.976 "trsvcid": "$NVMF_PORT", 00:10:50.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.976 "hdgst": ${hdgst:-false}, 00:10:50.976 "ddgst": ${ddgst:-false} 00:10:50.976 }, 00:10:50.976 "method": "bdev_nvme_attach_controller" 00:10:50.976 } 00:10:50.976 EOF 00:10:50.976 )") 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:50.976 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:50.976 "params": { 00:10:50.976 "name": "Nvme1", 00:10:50.976 "trtype": "tcp", 00:10:50.976 "traddr": "10.0.0.2", 00:10:50.976 "adrfam": "ipv4", 00:10:50.976 "trsvcid": "4420", 00:10:50.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.976 "hdgst": false, 00:10:50.976 "ddgst": false 00:10:50.976 }, 00:10:50.976 "method": "bdev_nvme_attach_controller" 00:10:50.976 }' 00:10:50.976 [2024-07-24 21:58:30.011911] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:10:50.976 [2024-07-24 21:58:30.011959] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594501 ] 00:10:50.976 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.976 [2024-07-24 21:58:30.092546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:50.976 [2024-07-24 21:58:30.166267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.976 [2024-07-24 21:58:30.166363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.976 [2024-07-24 21:58:30.166365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.234 I/O targets: 00:10:51.234 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:51.234 00:10:51.234 00:10:51.234 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.234 http://cunit.sourceforge.net/ 00:10:51.234 00:10:51.234 00:10:51.234 Suite: bdevio tests on: Nvme1n1 00:10:51.492 Test: blockdev write read block ...passed 00:10:51.492 Test: blockdev write zeroes read block ...passed 00:10:51.492 Test: blockdev write zeroes read no split ...passed 00:10:51.492 Test: blockdev write zeroes read split ...passed 00:10:51.492 Test: blockdev write zeroes read split partial ...passed 00:10:51.492 Test: blockdev reset ...[2024-07-24 21:58:30.577178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:51.492 [2024-07-24 21:58:30.577241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc7810 (9): Bad file descriptor 00:10:51.751 [2024-07-24 21:58:30.755286] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:51.751 passed 00:10:51.751 Test: blockdev write read 8 blocks ...passed 00:10:51.751 Test: blockdev write read size > 128k ...passed 00:10:51.751 Test: blockdev write read invalid size ...passed 00:10:51.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.751 Test: blockdev write read max offset ...passed 00:10:51.751 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.751 Test: blockdev writev readv 8 blocks ...passed 00:10:52.009 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.009 Test: blockdev writev readv block ...passed 00:10:52.009 Test: blockdev writev readv size > 128k ...passed 00:10:52.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.010 Test: blockdev comparev and writev ...[2024-07-24 21:58:31.015238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.010 [2024-07-24 21:58:31.015268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.015284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.010 [2024-07-24 21:58:31.015294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.015602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.010 [2024-07-24 21:58:31.015614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.015628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.010 [2024-07-24 21:58:31.015638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.015949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.010 [2024-07-24 21:58:31.015961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.015974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.010 [2024-07-24 21:58:31.015983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.016301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.010 [2024-07-24 21:58:31.016312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.016326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.010 [2024-07-24 21:58:31.016339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:52.010 passed 00:10:52.010 Test: blockdev nvme passthru rw ...passed 00:10:52.010 Test: blockdev nvme passthru vendor specific ...[2024-07-24 21:58:31.098241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.010 [2024-07-24 21:58:31.098258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.098484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.010 [2024-07-24 21:58:31.098495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.098707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.010 [2024-07-24 21:58:31.098723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:52.010 [2024-07-24 21:58:31.098929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.010 [2024-07-24 21:58:31.098941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:52.010 passed 00:10:52.010 Test: blockdev nvme admin passthru ...passed 00:10:52.010 Test: blockdev copy ...passed 00:10:52.010 00:10:52.010 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.010 suites 1 1 n/a 0 0 00:10:52.010 tests 23 23 23 0 0 00:10:52.010 asserts 152 152 152 0 n/a 00:10:52.010 00:10:52.010 Elapsed time = 1.502 seconds 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.269 rmmod nvme_tcp 00:10:52.269 rmmod nvme_fabrics 00:10:52.269 rmmod nvme_keyring 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2594222 ']' 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2594222 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2594222 ']' 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2594222 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2594222 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2594222' 00:10:52.269 killing process with pid 2594222 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2594222 00:10:52.269 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2594222 00:10:52.531 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:52.531 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:52.531 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:52.531 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:52.531 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:52.531 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.531 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.531 21:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.066 00:10:55.066 real 0m12.130s 00:10:55.066 user 0m14.304s 00:10:55.066 sys 0m6.183s 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.066 ************************************ 00:10:55.066 END TEST nvmf_bdevio 00:10:55.066 ************************************ 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:55.066 00:10:55.066 real 4m52.271s 00:10:55.066 user 10m49.783s 00:10:55.066 sys 2m1.327s 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.066 ************************************ 00:10:55.066 END TEST nvmf_target_core 00:10:55.066 ************************************ 00:10:55.066 21:58:33 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:55.066 21:58:33 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:55.066 21:58:33 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.066 21:58:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:55.066 ************************************ 00:10:55.066 START TEST nvmf_target_extra 00:10:55.066 ************************************ 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:55.066 * Looking for test storage... 00:10:55.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:55.066 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:55.067 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:55.067 21:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
00:10:55.067 21:58:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:55.067 21:58:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.067 21:58:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 ************************************ 00:10:55.067 START TEST nvmf_example 00:10:55.067 ************************************ 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:55.067 * Looking for test storage... 00:10:55.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.067 21:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.067 21:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:03.182 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:03.182 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:03.182 Found net devices under 0000:af:00.0: cvl_0_0 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.182 21:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:03.182 Found net devices under 0000:af:00.1: cvl_0_1 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:03.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:11:03.182 00:11:03.182 --- 10.0.0.2 ping statistics --- 00:11:03.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.182 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:11:03.182 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:11:03.182 00:11:03.183 --- 10.0.0.1 ping statistics --- 00:11:03.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.183 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2598512 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2598512 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2598512 ']' 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.183 21:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.183 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.183 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.183 21:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:03.183 21:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:03.441 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.412 Initializing NVMe Controllers 00:11:13.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:13.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:13.412 Initialization complete. Launching workers. 00:11:13.412 ======================================================== 00:11:13.412 Latency(us) 00:11:13.412 Device Information : IOPS MiB/s Average min max 00:11:13.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17391.96 67.94 3679.53 651.74 15572.06 00:11:13.412 ======================================================== 00:11:13.412 Total : 17391.96 67.94 3679.53 651.74 15572.06 00:11:13.412 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:13.412 rmmod nvme_tcp 00:11:13.412 rmmod nvme_fabrics 00:11:13.412 rmmod nvme_keyring 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2598512 ']' 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2598512 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2598512 ']' 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2598512 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:13.412 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.412 21:58:52 
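Editor's note: the IOPS/latency row above is produced by spdk_nvme_perf with the flags shown in the trace: queue depth 64, 4096-byte random mixed read/write for 10 seconds against the TCP listener, one result row per attached namespace. A sketch of re-running it by hand follows; the binary path is the build-tree location from this job and would differ on an installed system.

# Sketch only: replay the initiator-side load against the same listener.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'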
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2598512 00:11:13.670 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:13.670 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:13.670 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2598512' 00:11:13.670 killing process with pid 2598512 00:11:13.670 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2598512 00:11:13.670 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2598512 00:11:13.670 nvmf threads initialize successfully 00:11:13.670 bdev subsystem init successfully 00:11:13.670 created a nvmf target service 00:11:13.670 create targets's poll groups done 00:11:13.670 all subsystems of target started 00:11:13.670 nvmf target is running 00:11:13.670 all subsystems of target stopped 00:11:13.670 destroy targets's poll groups done 00:11:13.670 destroyed the nvmf target service 00:11:13.670 bdev subsystem finish successfully 00:11:13.670 nvmf threads destroy successfully 00:11:13.670 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:13.671 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:13.671 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:13.671 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:13.671 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:13.671 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.671 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.671 21:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.199 21:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:16.199 21:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:16.199 21:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:16.199 21:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.199 00:11:16.200 real 0m20.950s 00:11:16.200 user 0m45.359s 00:11:16.200 sys 0m7.665s 00:11:16.200 21:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.200 21:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 ************************************ 00:11:16.200 END TEST nvmf_example 00:11:16.200 ************************************ 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.200 21:58:55 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 ************************************ 00:11:16.200 START TEST nvmf_filesystem 00:11:16.200 ************************************ 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:16.200 * Looking for test storage... 00:11:16.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:16.200 21:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:16.200 21:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:16.200 21:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:16.200 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:16.200 21:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:16.200 #define SPDK_CONFIG_H 00:11:16.200 #define SPDK_CONFIG_APPS 1 00:11:16.200 #define SPDK_CONFIG_ARCH native 00:11:16.200 #undef SPDK_CONFIG_ASAN 00:11:16.200 #undef SPDK_CONFIG_AVAHI 00:11:16.200 #undef SPDK_CONFIG_CET 00:11:16.200 #define SPDK_CONFIG_COVERAGE 1 00:11:16.200 #define SPDK_CONFIG_CROSS_PREFIX 00:11:16.200 #undef SPDK_CONFIG_CRYPTO 00:11:16.200 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:16.200 #undef SPDK_CONFIG_CUSTOMOCF 00:11:16.200 #undef SPDK_CONFIG_DAOS 00:11:16.200 #define SPDK_CONFIG_DAOS_DIR 00:11:16.200 #define SPDK_CONFIG_DEBUG 1 00:11:16.200 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:16.200 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:16.200 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:16.200 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:16.201 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:16.201 #undef SPDK_CONFIG_DPDK_UADK 00:11:16.201 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:16.201 #define SPDK_CONFIG_EXAMPLES 1 00:11:16.201 #undef SPDK_CONFIG_FC 00:11:16.201 #define SPDK_CONFIG_FC_PATH 00:11:16.201 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:16.201 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:16.201 #undef SPDK_CONFIG_FUSE 00:11:16.201 #undef SPDK_CONFIG_FUZZER 00:11:16.201 #define SPDK_CONFIG_FUZZER_LIB 00:11:16.201 #undef SPDK_CONFIG_GOLANG 00:11:16.201 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:16.201 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:16.201 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:16.201 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:16.201 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:16.201 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:16.201 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:16.201 #define SPDK_CONFIG_IDXD 1 00:11:16.201 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:16.201 #undef SPDK_CONFIG_IPSEC_MB 00:11:16.201 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:16.201 #define SPDK_CONFIG_ISAL 1 00:11:16.201 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:16.201 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:16.201 #define SPDK_CONFIG_LIBDIR 00:11:16.201 #undef SPDK_CONFIG_LTO 00:11:16.201 #define SPDK_CONFIG_MAX_LCORES 128 00:11:16.201 #define SPDK_CONFIG_NVME_CUSE 1 00:11:16.201 #undef SPDK_CONFIG_OCF 00:11:16.201 #define SPDK_CONFIG_OCF_PATH 00:11:16.201 #define SPDK_CONFIG_OPENSSL_PATH 00:11:16.201 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:16.201 #define SPDK_CONFIG_PGO_DIR 00:11:16.201 #undef SPDK_CONFIG_PGO_USE 00:11:16.201 #define SPDK_CONFIG_PREFIX /usr/local 00:11:16.201 #undef SPDK_CONFIG_RAID5F 00:11:16.201 #undef SPDK_CONFIG_RBD 00:11:16.201 #define SPDK_CONFIG_RDMA 1 00:11:16.201 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:16.201 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:16.201 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:16.201 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:16.201 #define SPDK_CONFIG_SHARED 1 00:11:16.201 #undef SPDK_CONFIG_SMA 00:11:16.201 #define SPDK_CONFIG_TESTS 1 00:11:16.201 #undef SPDK_CONFIG_TSAN 00:11:16.201 #define SPDK_CONFIG_UBLK 1 00:11:16.201 #define SPDK_CONFIG_UBSAN 1 00:11:16.201 #undef SPDK_CONFIG_UNIT_TESTS 00:11:16.201 #undef SPDK_CONFIG_URING 00:11:16.201 #define SPDK_CONFIG_URING_PATH 00:11:16.201 #undef SPDK_CONFIG_URING_ZNS 00:11:16.201 #undef SPDK_CONFIG_USDT 00:11:16.201 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:16.201 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:16.201 #define SPDK_CONFIG_VFIO_USER 1 00:11:16.201 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:11:16.201 #define SPDK_CONFIG_VHOST 1 00:11:16.201 #define SPDK_CONFIG_VIRTIO 1 00:11:16.201 #undef SPDK_CONFIG_VTUNE 00:11:16.201 #define SPDK_CONFIG_VTUNE_DIR 00:11:16.201 #define SPDK_CONFIG_WERROR 1 00:11:16.201 #define SPDK_CONFIG_WPDK_DIR 00:11:16.201 #undef SPDK_CONFIG_XNVME 00:11:16.201 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:16.201 21:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:16.201 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:16.202 21:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:16.202 21:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2600977 ]] 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2600977 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.vXxFnY 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vXxFnY/tests/target /tmp/spdk.vXxFnY 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:16.202 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=955215872 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4329213952 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=55290368000 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742276608 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6451908608 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30861217792 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12325425152 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348456960 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23031808 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30870241280 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=897024 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6174220288 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174224384 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:16.203 * Looking for test storage... 
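The storage probe traced above (set_test_storage in autotest_common.sh) amounts to reading "df -T" into per-mount associative arrays and taking the first candidate directory whose backing filesystem has room for the requested ~2 GiB. A condensed sketch of that pattern follows; the helper name and the 1K-block-to-byte handling are illustrative, not the literal SPDK code.

  # Sketch: choose a test-storage directory with at least $1 bytes free.
  # Mirrors the df-parsing loop visible in the trace above.
  pick_test_storage() {
    local requested_size=$1; shift            # e.g. 2214592512 as in the trace
    local -A avails
    local source fs size used avail mount

    # df -T columns: Filesystem Type 1K-blocks Used Available Use% Mounted-on
    while read -r source fs size used avail _ mount; do
      avails["$mount"]=$((avail * 1024))      # assumption: convert 1K blocks to bytes
    done < <(df -T | grep -v Filesystem)

    local dir mnt
    for dir in "$@"; do                       # candidates, most preferred first
      mnt=$(df "$dir" 2>/dev/null | awk '$1 !~ /Filesystem/ {print $6}') || continue
      if (( ${avails[$mnt]:-0} >= requested_size )); then
        printf '* Found test storage at %s\n' "$dir"
        return 0
      fi
    done
    return 1
  }

  # e.g.: pick_test_storage 2214592512 "$testdir" "$storage_fallback"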
00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=55290368000 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8666501120 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.203 21:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:23.329 
21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:23.329 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:23.329 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:23.329 Found net devices under 0000:af:00.0: cvl_0_0 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:23.329 Found net devices under 0000:af:00.1: cvl_0_1 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.329 21:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:23.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:11:23.329 00:11:23.329 --- 10.0.0.2 ping statistics --- 00:11:23.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.329 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:11:23.329 00:11:23.329 --- 10.0.0.1 ping statistics --- 00:11:23.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.329 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.329 ************************************ 00:11:23.329 START TEST nvmf_filesystem_no_in_capsule 00:11:23.329 ************************************ 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2604300 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2604300 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2604300 ']' 00:11:23.329 
21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.329 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.329 [2024-07-24 21:59:02.398552] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:11:23.329 [2024-07-24 21:59:02.398596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.329 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.329 [2024-07-24 21:59:02.478900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.586 [2024-07-24 21:59:02.571229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.586 [2024-07-24 21:59:02.571267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.586 [2024-07-24 21:59:02.571276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.586 [2024-07-24 21:59:02.571285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.586 [2024-07-24 21:59:02.571292] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
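The nvmf_tcp_init / nvmfappstart sequence above moves the target-side port cvl_0_0 into a private network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, verifies reachability with ping, and only then launches nvmf_tgt inside the namespace and waits for its RPC socket. Roughly the following, with interface names and addresses taken from the trace; the binary/script paths are illustrative and the polling loop is a simplified stand-in for waitforlisten.

  # Sketch of the phy-mode TCP plumbing traced above.
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NETNS=cvl_0_0_ns_spdk
  TGT_IP=10.0.0.2; INI_IP=10.0.0.1

  ip netns add "$NETNS"
  ip link set "$TGT_IF" netns "$NETNS"               # target port lives in the namespace
  ip addr add "$INI_IP/24" dev "$INI_IF"             # initiator side stays in the root ns
  ip netns exec "$NETNS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NETNS" ip link set "$TGT_IF" up
  ip netns exec "$NETNS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 "$TGT_IP"                                # sanity check before starting the target

  # Start the target inside the namespace; the UNIX-domain RPC socket stays
  # reachable from the root namespace because a netns does not isolate the filesystem.
  ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
  done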
00:11:23.586 [2024-07-24 21:59:02.571388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.586 [2024-07-24 21:59:02.571500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.586 [2024-07-24 21:59:02.571585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.586 [2024-07-24 21:59:02.571587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.150 [2024-07-24 21:59:03.290202] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.150 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.407 Malloc1 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.407 21:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.407 [2024-07-24 21:59:03.453150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:24.407 { 00:11:24.407 "name": "Malloc1", 00:11:24.407 "aliases": [ 00:11:24.407 "280ca88c-bf5b-4512-907a-da01c0be8c58" 00:11:24.407 ], 00:11:24.407 "product_name": "Malloc disk", 00:11:24.407 "block_size": 512, 00:11:24.407 "num_blocks": 1048576, 00:11:24.407 "uuid": "280ca88c-bf5b-4512-907a-da01c0be8c58", 00:11:24.407 "assigned_rate_limits": { 00:11:24.407 "rw_ios_per_sec": 0, 00:11:24.407 "rw_mbytes_per_sec": 0, 00:11:24.407 "r_mbytes_per_sec": 0, 00:11:24.407 "w_mbytes_per_sec": 0 00:11:24.407 }, 00:11:24.407 "claimed": true, 00:11:24.407 "claim_type": "exclusive_write", 00:11:24.407 "zoned": false, 00:11:24.407 "supported_io_types": { 00:11:24.407 "read": 
true, 00:11:24.407 "write": true, 00:11:24.407 "unmap": true, 00:11:24.407 "flush": true, 00:11:24.407 "reset": true, 00:11:24.407 "nvme_admin": false, 00:11:24.407 "nvme_io": false, 00:11:24.407 "nvme_io_md": false, 00:11:24.407 "write_zeroes": true, 00:11:24.407 "zcopy": true, 00:11:24.407 "get_zone_info": false, 00:11:24.407 "zone_management": false, 00:11:24.407 "zone_append": false, 00:11:24.407 "compare": false, 00:11:24.407 "compare_and_write": false, 00:11:24.407 "abort": true, 00:11:24.407 "seek_hole": false, 00:11:24.407 "seek_data": false, 00:11:24.407 "copy": true, 00:11:24.407 "nvme_iov_md": false 00:11:24.407 }, 00:11:24.407 "memory_domains": [ 00:11:24.407 { 00:11:24.407 "dma_device_id": "system", 00:11:24.407 "dma_device_type": 1 00:11:24.407 }, 00:11:24.407 { 00:11:24.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.407 "dma_device_type": 2 00:11:24.407 } 00:11:24.407 ], 00:11:24.407 "driver_specific": {} 00:11:24.407 } 00:11:24.407 ]' 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:24.407 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.775 21:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:25.775 21:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:25.775 21:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.775 21:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:25.775 21:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:27.667 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:27.667 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:27.667 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:27.667 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:27.667 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.667 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:27.667 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:27.667 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:27.924 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:27.924 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:27.924 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:27.924 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:27.924 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:27.924 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:27.924 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:27.924 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:27.924 21:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:28.181 21:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:28.181 21:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.550 ************************************ 00:11:29.550 START TEST filesystem_ext4 00:11:29.550 ************************************ 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
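By this point the target has been provisioned entirely over JSON-RPC and the kernel initiator has attached to it: a tcp transport with zero in-capsule data (the case this test variant exercises), a 512 MiB Malloc1 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace and a listener on 10.0.0.2:4420, then nvme connect plus a wait for the SPDKISFASTANDAWESOME serial to show up in lsblk, and finally one GPT partition for the filesystem subtests. Condensed into the equivalent rpc.py / nvme-cli calls; the rpc.py path and the polling loop are illustrative, while the flags and values are the ones in the trace.

  # Sketch of the provisioning and host attach traced above.
  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  HOSTID=006f0d1b-21c0-e711-906e-00163566263e

  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: no in-capsule data
  $RPC bdev_malloc_create 512 512 -b Malloc1           # 512 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem "$SUBNQN" -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc1
  $RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420

  # Attach the kernel initiator and wait until the namespace appears as a block device.
  modprobe nvme-tcp
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

  # One GPT partition spanning the 512 MiB namespace, reused by every filesystem subtest.
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1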
00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:29.550 21:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:29.550 mke2fs 1.46.5 (30-Dec-2021) 00:11:29.550 Discarding device blocks: 0/522240 done 00:11:29.550 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:29.550 Filesystem UUID: 4079fb83-8d5c-4a12-9410-e2517d486290 00:11:29.550 Superblock backups stored on blocks: 00:11:29.550 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:29.550 00:11:29.550 Allocating group tables: 0/64 done 00:11:29.550 Writing inode tables: 0/64 done 00:11:29.550 Creating journal (8192 blocks): done 00:11:30.627 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:30.627 00:11:30.627 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:30.627 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.627 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.884 
21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2604300 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.884 00:11:30.884 real 0m1.476s 00:11:30.884 user 0m0.030s 00:11:30.884 sys 0m0.078s 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:30.884 ************************************ 00:11:30.884 END TEST filesystem_ext4 00:11:30.884 ************************************ 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.884 ************************************ 00:11:30.884 START TEST filesystem_btrfs 00:11:30.884 ************************************ 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:30.884 21:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:30.884 21:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:31.141 btrfs-progs v6.6.2 00:11:31.141 See https://btrfs.readthedocs.io for more information. 00:11:31.141 00:11:31.141 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:31.141 NOTE: several default settings have changed in version 5.15, please make sure 00:11:31.141 this does not affect your deployments: 00:11:31.141 - DUP for metadata (-m dup) 00:11:31.141 - enabled no-holes (-O no-holes) 00:11:31.141 - enabled free-space-tree (-R free-space-tree) 00:11:31.141 00:11:31.141 Label: (null) 00:11:31.141 UUID: 544784e4-cb73-4ec9-ab25-d3e6e31ccb89 00:11:31.141 Node size: 16384 00:11:31.141 Sector size: 4096 00:11:31.141 Filesystem size: 510.00MiB 00:11:31.141 Block group profiles: 00:11:31.142 Data: single 8.00MiB 00:11:31.142 Metadata: DUP 32.00MiB 00:11:31.142 System: DUP 8.00MiB 00:11:31.142 SSD detected: yes 00:11:31.142 Zoned device: no 00:11:31.142 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:31.142 Runtime features: free-space-tree 00:11:31.142 Checksum: crc32c 00:11:31.142 Number of devices: 1 00:11:31.142 Devices: 00:11:31.142 ID SIZE PATH 00:11:31.142 1 510.00MiB /dev/nvme0n1p1 00:11:31.142 00:11:31.142 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:31.142 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2604300 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.399 00:11:31.399 real 0m0.614s 00:11:31.399 user 0m0.028s 00:11:31.399 sys 0m0.140s 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.399 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:31.399 ************************************ 00:11:31.399 END TEST filesystem_btrfs 00:11:31.399 ************************************ 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.656 ************************************ 00:11:31.656 START TEST filesystem_xfs 00:11:31.656 ************************************ 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:31.656 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:31.656 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:31.656 = sectsz=512 attr=2, projid32bit=1 00:11:31.656 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:31.656 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:31.656 data = bsize=4096 blocks=130560, imaxpct=25 00:11:31.656 = sunit=0 swidth=0 blks 00:11:31.656 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:31.656 log =internal log bsize=4096 blocks=16384, version=2 00:11:31.656 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:31.656 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:32.584 Discarding blocks...Done. 00:11:32.584 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:32.584 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2604300 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.476 00:11:34.476 real 0m2.943s 00:11:34.476 user 0m0.024s 00:11:34.476 sys 0m0.087s 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:34.476 ************************************ 00:11:34.476 END TEST filesystem_xfs 00:11:34.476 ************************************ 00:11:34.476 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2604300 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2604300 ']' 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2604300 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.734 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2604300 00:11:34.992 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.992 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.992 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2604300' 00:11:34.992 killing process with pid 2604300 00:11:34.992 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2604300 00:11:34.992 21:59:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2604300 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:35.249 00:11:35.249 real 0m11.997s 00:11:35.249 user 0m46.705s 00:11:35.249 sys 0m1.753s 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.249 ************************************ 00:11:35.249 END TEST nvmf_filesystem_no_in_capsule 00:11:35.249 ************************************ 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.249 ************************************ 00:11:35.249 START TEST nvmf_filesystem_in_capsule 00:11:35.249 ************************************ 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2606416 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2606416 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2606416 ']' 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.249 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.250 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:35.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.250 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.250 21:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.507 [2024-07-24 21:59:14.470275] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:11:35.507 [2024-07-24 21:59:14.470320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.507 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.507 [2024-07-24 21:59:14.544932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.507 [2024-07-24 21:59:14.620036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.507 [2024-07-24 21:59:14.620074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.507 [2024-07-24 21:59:14.620084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.507 [2024-07-24 21:59:14.620092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.507 [2024-07-24 21:59:14.620116] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.507 [2024-07-24 21:59:14.620163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.507 [2024-07-24 21:59:14.620254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.507 [2024-07-24 21:59:14.620337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.507 [2024-07-24 21:59:14.620339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.070 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.070 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:36.070 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
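At this point the harness has restarted the target for the in-capsule run: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. An illustrative equivalent of that wait, assuming the standard SPDK rpc.py client and the spdk_get_version RPC as a liveness probe (the actual waitforlisten helper is more involved):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the target's default RPC socket before issuing any rpc_cmd calls.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done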
00:11:36.367 [2024-07-24 21:59:15.336076] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.367 Malloc1 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.367 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.368 [2024-07-24 21:59:15.492729] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:36.368 21:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:36.368 { 00:11:36.368 "name": "Malloc1", 00:11:36.368 "aliases": [ 00:11:36.368 "33508e29-efa5-4eed-b7b7-105433a5a40e" 00:11:36.368 ], 00:11:36.368 "product_name": "Malloc disk", 00:11:36.368 "block_size": 512, 00:11:36.368 "num_blocks": 1048576, 00:11:36.368 "uuid": "33508e29-efa5-4eed-b7b7-105433a5a40e", 00:11:36.368 "assigned_rate_limits": { 00:11:36.368 "rw_ios_per_sec": 0, 00:11:36.368 "rw_mbytes_per_sec": 0, 00:11:36.368 "r_mbytes_per_sec": 0, 00:11:36.368 "w_mbytes_per_sec": 0 00:11:36.368 }, 00:11:36.368 "claimed": true, 00:11:36.368 "claim_type": "exclusive_write", 00:11:36.368 "zoned": false, 00:11:36.368 "supported_io_types": { 00:11:36.368 "read": true, 00:11:36.368 "write": true, 00:11:36.368 "unmap": true, 00:11:36.368 "flush": true, 00:11:36.368 "reset": true, 00:11:36.368 "nvme_admin": false, 00:11:36.368 "nvme_io": false, 00:11:36.368 "nvme_io_md": false, 00:11:36.368 "write_zeroes": true, 00:11:36.368 "zcopy": true, 00:11:36.368 "get_zone_info": false, 00:11:36.368 "zone_management": false, 00:11:36.368 "zone_append": false, 00:11:36.368 "compare": false, 00:11:36.368 "compare_and_write": false, 00:11:36.368 "abort": true, 00:11:36.368 "seek_hole": false, 00:11:36.368 "seek_data": false, 00:11:36.368 "copy": true, 00:11:36.368 "nvme_iov_md": false 00:11:36.368 }, 00:11:36.368 "memory_domains": [ 00:11:36.368 { 00:11:36.368 "dma_device_id": "system", 00:11:36.368 "dma_device_type": 1 00:11:36.368 }, 00:11:36.368 { 00:11:36.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.368 "dma_device_type": 2 00:11:36.368 } 00:11:36.368 ], 00:11:36.368 "driver_specific": {} 00:11:36.368 } 00:11:36.368 ]' 00:11:36.368 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:36.626 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:36.626 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:36.626 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:36.626 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:36.626 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:36.626 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:36.626 21:59:15 
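The rpc_cmd calls traced in this stretch provision the target and then size-check the backing bdev: a TCP transport with a 4096-byte in-capsule data limit, a 512 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420; get_bdev_size multiplies block_size by num_blocks from the bdev_get_bdevs JSON. Condensed into plain rpc.py invocations, on the assumption that rpc_cmd in this harness simply forwards to rpc.py against the namespaced target:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096                  # 4 KiB in-capsule data
    rpc.py bdev_malloc_create 512 512 -b Malloc1                            # 512 MiB, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    bs=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')           # 512
    nb=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')           # 1048576
    echo $((bs * nb))                                                       # 536870912 bytes = 512 MiB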
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.999 21:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.999 21:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:37.999 21:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.999 21:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:37.999 21:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:39.892 21:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:39.892 21:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:39.892 21:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.892 21:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:39.892 21:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.892 21:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:39.892 21:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:39.892 21:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:39.892 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:39.892 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:39.892 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:39.892 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:39.892 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:39.892 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:39.892 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:39.892 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:39.892 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:40.149 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:40.713 21:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.644 ************************************ 00:11:41.644 START TEST filesystem_in_capsule_ext4 00:11:41.644 ************************************ 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:41.644 21:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:41.644 mke2fs 1.46.5 (30-Dec-2021) 00:11:41.644 Discarding device blocks: 0/522240 done 00:11:41.644 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:41.644 Filesystem UUID: d1a60c67-2171-48da-bc54-28ab075a4e5e 00:11:41.644 Superblock backups stored on blocks: 00:11:41.644 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:11:41.644 00:11:41.644 Allocating group tables: 0/64 done 00:11:41.644 Writing inode tables: 0/64 done 00:11:44.918 Creating journal (8192 blocks): done 00:11:44.919 Writing superblocks and filesystem accounting information: 0/64 done 00:11:44.919 00:11:44.919 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:44.919 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.176 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.176 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:45.176 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.176 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:45.176 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:45.176 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2606416 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.433 00:11:45.433 real 0m3.744s 00:11:45.433 user 0m0.026s 00:11:45.433 sys 0m0.083s 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:45.433 ************************************ 00:11:45.433 END TEST filesystem_in_capsule_ext4 00:11:45.433 ************************************ 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.433 21:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.433 ************************************ 00:11:45.433 START TEST filesystem_in_capsule_btrfs 00:11:45.433 ************************************ 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:45.433 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:45.690 btrfs-progs v6.6.2 00:11:45.690 See https://btrfs.readthedocs.io for more information. 00:11:45.690 00:11:45.690 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:45.690 NOTE: several default settings have changed in version 5.15, please make sure 00:11:45.690 this does not affect your deployments: 00:11:45.690 - DUP for metadata (-m dup) 00:11:45.690 - enabled no-holes (-O no-holes) 00:11:45.690 - enabled free-space-tree (-R free-space-tree) 00:11:45.690 00:11:45.690 Label: (null) 00:11:45.690 UUID: dfeb4e64-a449-4637-b569-6a25af39fd8a 00:11:45.690 Node size: 16384 00:11:45.690 Sector size: 4096 00:11:45.690 Filesystem size: 510.00MiB 00:11:45.690 Block group profiles: 00:11:45.690 Data: single 8.00MiB 00:11:45.690 Metadata: DUP 32.00MiB 00:11:45.690 System: DUP 8.00MiB 00:11:45.690 SSD detected: yes 00:11:45.690 Zoned device: no 00:11:45.690 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:45.690 Runtime features: free-space-tree 00:11:45.690 Checksum: crc32c 00:11:45.690 Number of devices: 1 00:11:45.690 Devices: 00:11:45.690 ID SIZE PATH 00:11:45.690 1 510.00MiB /dev/nvme0n1p1 00:11:45.690 00:11:45.690 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:45.690 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:46.618 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:46.618 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:46.618 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:46.618 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:46.618 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:46.618 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2606416 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.875 00:11:46.875 real 0m1.355s 00:11:46.875 user 0m0.029s 00:11:46.875 sys 0m0.140s 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.875 21:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:46.875 ************************************ 00:11:46.875 END TEST filesystem_in_capsule_btrfs 00:11:46.875 ************************************ 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.875 ************************************ 00:11:46.875 START TEST filesystem_in_capsule_xfs 00:11:46.875 ************************************ 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:46.875 21:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:46.875 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:46.876 = sectsz=512 attr=2, projid32bit=1 00:11:46.876 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:46.876 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:46.876 data = bsize=4096 blocks=130560, imaxpct=25 00:11:46.876 = sunit=0 swidth=0 blks 00:11:46.876 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:46.876 log =internal log bsize=4096 blocks=16384, version=2 00:11:46.876 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:46.876 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:11:48.242 Discarding blocks...Done. 00:11:48.242 21:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:48.242 21:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:50.134 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:50.135 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:50.135 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:50.135 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2606416 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:50.391 00:11:50.391 real 0m3.451s 00:11:50.391 user 0m0.035s 00:11:50.391 sys 0m0.079s 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:50.391 ************************************ 00:11:50.391 END TEST filesystem_in_capsule_xfs 00:11:50.391 ************************************ 00:11:50.391 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:50.648 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:50.648 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.648 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.648 21:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:50.648 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:50.648 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.648 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:50.648 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2606416 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2606416 ']' 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2606416 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2606416 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2606416' 00:11:50.905 killing process with pid 2606416 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2606416 00:11:50.905 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2606416 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:51.163 00:11:51.163 real 0m15.870s 00:11:51.163 user 1m1.988s 
00:11:51.163 sys 0m1.985s 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.163 ************************************ 00:11:51.163 END TEST nvmf_filesystem_in_capsule 00:11:51.163 ************************************ 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:51.163 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:51.163 rmmod nvme_tcp 00:11:51.163 rmmod nvme_fabrics 00:11:51.421 rmmod nvme_keyring 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.421 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.368 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:53.368 00:11:53.368 real 0m37.467s 00:11:53.368 user 1m50.840s 00:11:53.368 sys 0m9.217s 00:11:53.368 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.368 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.368 ************************************ 00:11:53.368 END TEST nvmf_filesystem 00:11:53.368 ************************************ 00:11:53.368 21:59:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:53.368 21:59:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:53.368 21:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.368 21:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:53.627 ************************************ 00:11:53.627 START TEST nvmf_target_discovery 00:11:53.627 ************************************ 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:53.627 * Looking for test storage... 00:11:53.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.627 21:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.627 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:53.628 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:01.748 21:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:01.748 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:01.748 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:01.748 Found net devices under 0000:af:00.0: cvl_0_0 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:01.748 21:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:01.748 Found net devices under 0000:af:00.1: cvl_0_1 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.748 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.749 21:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:01.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:12:01.749 00:12:01.749 --- 10.0.0.2 ping statistics --- 00:12:01.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.749 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:12:01.749 00:12:01.749 --- 10.0.0.1 ping statistics --- 00:12:01.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.749 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2613000 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2613000 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2613000 ']' 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.749 21:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.749 21:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.749 [2024-07-24 21:59:39.878851] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:12:01.749 [2024-07-24 21:59:39.878897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.749 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.749 [2024-07-24 21:59:39.952161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.749 [2024-07-24 21:59:40.031019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.749 [2024-07-24 21:59:40.031059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.749 [2024-07-24 21:59:40.031070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.749 [2024-07-24 21:59:40.031079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.749 [2024-07-24 21:59:40.031086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.749 [2024-07-24 21:59:40.031143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.749 [2024-07-24 21:59:40.031234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.749 [2024-07-24 21:59:40.031319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.749 [2024-07-24 21:59:40.031320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.749 [2024-07-24 21:59:40.738110] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.749 Null1 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.749 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 [2024-07-24 21:59:40.790453] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 Null2 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 Null3 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 Null4 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.750 21:59:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:12:02.010 00:12:02.010 Discovery Log Number of Records 6, Generation counter 6 00:12:02.010 =====Discovery Log Entry 0====== 00:12:02.010 trtype: tcp 00:12:02.010 adrfam: ipv4 00:12:02.010 subtype: current discovery subsystem 00:12:02.010 treq: not required 00:12:02.010 portid: 0 00:12:02.010 trsvcid: 4420 00:12:02.010 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:02.010 traddr: 10.0.0.2 00:12:02.010 eflags: explicit discovery connections, duplicate discovery information 00:12:02.010 sectype: none 00:12:02.010 =====Discovery Log Entry 1====== 00:12:02.010 trtype: tcp 00:12:02.010 adrfam: ipv4 00:12:02.010 subtype: nvme subsystem 00:12:02.010 treq: not required 00:12:02.010 portid: 0 00:12:02.010 trsvcid: 4420 00:12:02.010 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:02.010 traddr: 10.0.0.2 00:12:02.010 eflags: none 00:12:02.010 sectype: none 00:12:02.010 =====Discovery Log Entry 2====== 00:12:02.010 trtype: tcp 00:12:02.010 adrfam: ipv4 00:12:02.010 subtype: nvme subsystem 00:12:02.010 treq: not required 00:12:02.010 portid: 0 00:12:02.010 trsvcid: 4420 00:12:02.010 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:02.010 traddr: 10.0.0.2 00:12:02.010 eflags: none 00:12:02.010 sectype: none 00:12:02.010 =====Discovery Log Entry 3====== 00:12:02.010 trtype: tcp 00:12:02.010 adrfam: ipv4 00:12:02.010 subtype: nvme subsystem 00:12:02.010 treq: not required 00:12:02.010 portid: 0 00:12:02.010 trsvcid: 4420 00:12:02.010 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:02.010 traddr: 10.0.0.2 00:12:02.010 eflags: none 00:12:02.010 sectype: none 00:12:02.010 =====Discovery Log Entry 4====== 00:12:02.010 trtype: tcp 00:12:02.010 adrfam: ipv4 00:12:02.010 subtype: nvme subsystem 00:12:02.010 treq: not required 00:12:02.010 portid: 0 00:12:02.010 trsvcid: 4420 00:12:02.010 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:02.010 traddr: 10.0.0.2 00:12:02.010 eflags: none 00:12:02.010 sectype: none 00:12:02.010 =====Discovery Log Entry 5====== 00:12:02.010 trtype: tcp 00:12:02.010 adrfam: ipv4 00:12:02.010 subtype: discovery subsystem referral 00:12:02.010 treq: not required 00:12:02.010 portid: 0 00:12:02.010 trsvcid: 4430 00:12:02.010 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:02.010 traddr: 10.0.0.2 00:12:02.010 eflags: none 00:12:02.010 sectype: none 00:12:02.010 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:02.010 Perform nvmf subsystem discovery via RPC 00:12:02.010 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.011 [ 00:12:02.011 { 00:12:02.011 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:02.011 "subtype": "Discovery", 00:12:02.011 "listen_addresses": [ 00:12:02.011 { 00:12:02.011 "trtype": "TCP", 00:12:02.011 "adrfam": "IPv4", 00:12:02.011 "traddr": "10.0.0.2", 00:12:02.011 "trsvcid": "4420" 00:12:02.011 } 00:12:02.011 ], 00:12:02.011 "allow_any_host": true, 00:12:02.011 "hosts": [] 00:12:02.011 }, 00:12:02.011 { 00:12:02.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:02.011 "subtype": "NVMe", 00:12:02.011 "listen_addresses": [ 00:12:02.011 { 00:12:02.011 "trtype": "TCP", 00:12:02.011 "adrfam": "IPv4", 00:12:02.011 
"traddr": "10.0.0.2", 00:12:02.011 "trsvcid": "4420" 00:12:02.011 } 00:12:02.011 ], 00:12:02.011 "allow_any_host": true, 00:12:02.011 "hosts": [], 00:12:02.011 "serial_number": "SPDK00000000000001", 00:12:02.011 "model_number": "SPDK bdev Controller", 00:12:02.011 "max_namespaces": 32, 00:12:02.011 "min_cntlid": 1, 00:12:02.011 "max_cntlid": 65519, 00:12:02.011 "namespaces": [ 00:12:02.011 { 00:12:02.011 "nsid": 1, 00:12:02.011 "bdev_name": "Null1", 00:12:02.011 "name": "Null1", 00:12:02.011 "nguid": "D273891C7BB749CB8B31CBF5482FB89F", 00:12:02.011 "uuid": "d273891c-7bb7-49cb-8b31-cbf5482fb89f" 00:12:02.011 } 00:12:02.011 ] 00:12:02.011 }, 00:12:02.011 { 00:12:02.011 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:02.011 "subtype": "NVMe", 00:12:02.011 "listen_addresses": [ 00:12:02.011 { 00:12:02.011 "trtype": "TCP", 00:12:02.011 "adrfam": "IPv4", 00:12:02.011 "traddr": "10.0.0.2", 00:12:02.011 "trsvcid": "4420" 00:12:02.011 } 00:12:02.011 ], 00:12:02.011 "allow_any_host": true, 00:12:02.011 "hosts": [], 00:12:02.011 "serial_number": "SPDK00000000000002", 00:12:02.011 "model_number": "SPDK bdev Controller", 00:12:02.011 "max_namespaces": 32, 00:12:02.011 "min_cntlid": 1, 00:12:02.011 "max_cntlid": 65519, 00:12:02.011 "namespaces": [ 00:12:02.011 { 00:12:02.011 "nsid": 1, 00:12:02.011 "bdev_name": "Null2", 00:12:02.011 "name": "Null2", 00:12:02.011 "nguid": "FA687F52590742A79DC745F0FCA62FD8", 00:12:02.011 "uuid": "fa687f52-5907-42a7-9dc7-45f0fca62fd8" 00:12:02.011 } 00:12:02.011 ] 00:12:02.011 }, 00:12:02.011 { 00:12:02.011 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:02.011 "subtype": "NVMe", 00:12:02.011 "listen_addresses": [ 00:12:02.011 { 00:12:02.011 "trtype": "TCP", 00:12:02.011 "adrfam": "IPv4", 00:12:02.011 "traddr": "10.0.0.2", 00:12:02.011 "trsvcid": "4420" 00:12:02.011 } 00:12:02.011 ], 00:12:02.011 "allow_any_host": true, 00:12:02.011 "hosts": [], 00:12:02.011 "serial_number": "SPDK00000000000003", 00:12:02.011 "model_number": "SPDK bdev Controller", 00:12:02.011 "max_namespaces": 32, 00:12:02.011 "min_cntlid": 1, 00:12:02.011 "max_cntlid": 65519, 00:12:02.011 "namespaces": [ 00:12:02.011 { 00:12:02.011 "nsid": 1, 00:12:02.011 "bdev_name": "Null3", 00:12:02.011 "name": "Null3", 00:12:02.011 "nguid": "6C87154BACFF4517B04C260E82040B10", 00:12:02.011 "uuid": "6c87154b-acff-4517-b04c-260e82040b10" 00:12:02.011 } 00:12:02.011 ] 00:12:02.011 }, 00:12:02.011 { 00:12:02.011 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:02.011 "subtype": "NVMe", 00:12:02.011 "listen_addresses": [ 00:12:02.011 { 00:12:02.011 "trtype": "TCP", 00:12:02.011 "adrfam": "IPv4", 00:12:02.011 "traddr": "10.0.0.2", 00:12:02.011 "trsvcid": "4420" 00:12:02.011 } 00:12:02.011 ], 00:12:02.011 "allow_any_host": true, 00:12:02.011 "hosts": [], 00:12:02.011 "serial_number": "SPDK00000000000004", 00:12:02.011 "model_number": "SPDK bdev Controller", 00:12:02.011 "max_namespaces": 32, 00:12:02.011 "min_cntlid": 1, 00:12:02.011 "max_cntlid": 65519, 00:12:02.011 "namespaces": [ 00:12:02.011 { 00:12:02.011 "nsid": 1, 00:12:02.011 "bdev_name": "Null4", 00:12:02.011 "name": "Null4", 00:12:02.011 "nguid": "EDD106EB60194F7BBBEDF83194F87598", 00:12:02.011 "uuid": "edd106eb-6019-4f7b-bbed-f83194f87598" 00:12:02.011 } 00:12:02.011 ] 00:12:02.011 } 00:12:02.011 ] 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:02.011 21:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:02.011 21:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:02.011 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.012 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.012 rmmod nvme_tcp 00:12:02.012 rmmod nvme_fabrics 00:12:02.271 rmmod nvme_keyring 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.271 21:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2613000 ']' 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2613000 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2613000 ']' 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2613000 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2613000 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2613000' 00:12:02.271 killing process with pid 2613000 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2613000 00:12:02.271 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2613000 00:12:02.530 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.530 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.530 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.530 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.530 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.530 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.530 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.530 21:59:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.435 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.435 00:12:04.436 real 0m11.003s 00:12:04.436 user 0m8.032s 00:12:04.436 sys 0m5.791s 00:12:04.436 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.436 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.436 ************************************ 00:12:04.436 END TEST nvmf_target_discovery 00:12:04.436 ************************************ 00:12:04.436 21:59:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:04.436 21:59:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:04.436 21:59:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.436 21:59:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.694 ************************************ 00:12:04.694 START TEST nvmf_referrals 00:12:04.694 ************************************ 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:04.694 * Looking for test storage... 00:12:04.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.694 21:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.694 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.695 21:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.695 21:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.260 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.260 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:11.260 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:11.260 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:11.260 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:11.260 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:11.260 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:11.261 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.261 21:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:11.261 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:11.261 Found net devices under 0000:af:00.0: cvl_0_0 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 
00:12:11.261 Found net devices under 0000:af:00.1: cvl_0_1 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.261 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:11.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:11.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:12:11.521 00:12:11.521 --- 10.0.0.2 ping statistics --- 00:12:11.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.521 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:12:11.521 00:12:11.521 --- 10.0.0.1 ping statistics --- 00:12:11.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.521 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2616933 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2616933 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2616933 ']' 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.521 21:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.521 [2024-07-24 21:59:50.731682] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:12:11.521 [2024-07-24 21:59:50.731753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.780 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.780 [2024-07-24 21:59:50.805327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.780 [2024-07-24 21:59:50.882141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.780 [2024-07-24 21:59:50.882179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.780 [2024-07-24 21:59:50.882188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.780 [2024-07-24 21:59:50.882200] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.780 [2024-07-24 21:59:50.882207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.780 [2024-07-24 21:59:50.882266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.780 [2024-07-24 21:59:50.882361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.780 [2024-07-24 21:59:50.882381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.780 [2024-07-24 21:59:50.882383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.347 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.347 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:12.347 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.347 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:12.347 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.606 [2024-07-24 21:59:51.585070] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.606 21:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.606 [2024-07-24 21:59:51.601259] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.606 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.865 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.865 21:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:12.865 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:12.865 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.865 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.865 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.865 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.865 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:13.123 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:13.123 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:13.123 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:13.123 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.123 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.124 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:13.382 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:13.382 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:13.382 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:13.382 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:13.382 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.382 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.643 21:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.643 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:13.908 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:13.908 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:13.908 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:13.908 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:13.908 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.908 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.908 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:14.167 rmmod nvme_tcp 00:12:14.167 rmmod nvme_fabrics 00:12:14.167 rmmod nvme_keyring 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2616933 ']' 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2616933 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2616933 ']' 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2616933 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2616933 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2616933' 00:12:14.167 killing process with pid 2616933 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2616933 00:12:14.167 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2616933 00:12:14.425 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:14.425 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:14.425 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:14.425 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:14.425 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:14.425 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.425 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.425 21:59:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.958 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:16.958 00:12:16.958 real 0m11.914s 00:12:16.958 user 0m13.186s 00:12:16.958 sys 0m6.039s 00:12:16.958 21:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.958 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.958 ************************************ 00:12:16.958 END TEST nvmf_referrals 00:12:16.958 ************************************ 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.959 ************************************ 00:12:16.959 START TEST nvmf_connect_disconnect 00:12:16.959 ************************************ 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:16.959 * Looking for test storage... 00:12:16.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.959 21:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.959 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:16.960 21:59:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:23.521 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:23.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.521 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.522 22:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:23.522 Found net devices under 0000:af:00.0: cvl_0_0 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:23.522 Found net devices under 0000:af:00.1: cvl_0_1 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:23.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:12:23.522 00:12:23.522 --- 10.0.0.2 ping statistics --- 00:12:23.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.522 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
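The nvmf_tcp_init sequence traced here reduces to a short shell recipe: move the target-side port into its own network namespace, address both ends, open TCP port 4420, and ping in both directions. A minimal sketch, assuming the two E810 ports already enumerate as cvl_0_0 and cvl_0_1 and that it runs as root; the namespace name, addresses and port are the values echoed in the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> initiator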
00:12:23.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:12:23.522 00:12:23.522 --- 10.0.0.1 ping statistics --- 00:12:23.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.522 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2621341 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2621341 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2621341 ']' 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.522 22:00:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.522 [2024-07-24 22:00:02.699359] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
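With the data path verified, nvmfappstart launches the target binary inside that namespace by prefixing NVMF_APP with the netns command, exactly as the ip netns exec line above shows. Roughly equivalent, with the build path, shared-memory id, trace mask and core mask taken from the trace (the backgrounding and pid capture are the usual shell idiom, not shown verbatim in this log):

  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: reactors on cores 0-3
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!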
00:12:23.522 [2024-07-24 22:00:02.699411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.780 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.780 [2024-07-24 22:00:02.774095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.780 [2024-07-24 22:00:02.848372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.780 [2024-07-24 22:00:02.848410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.780 [2024-07-24 22:00:02.848422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.780 [2024-07-24 22:00:02.848431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.780 [2024-07-24 22:00:02.848438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.780 [2024-07-24 22:00:02.848484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.780 [2024-07-24 22:00:02.848580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.780 [2024-07-24 22:00:02.848599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.780 [2024-07-24 22:00:02.848600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.344 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:24.344 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:24.344 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.344 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:24.344 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.344 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.344 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:24.344 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.344 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.602 [2024-07-24 22:00:03.559952] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.602 22:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.602 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.603 [2024-07-24 22:00:03.614432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:24.603 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:27.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.909 22:00:20 
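Once the RPC socket answers, connect_disconnect.sh provisions the target over JSON-RPC and then loops num_iterations=5 times connecting and disconnecting an initiator; each pass prints one "disconnected 1 controller(s)" line. The RPC calls below are the ones traced above (rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py); the initiator side runs with tracing off, so the nvme connect/disconnect pair is an assumption about what produces those lines:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0                 # TCP transport; -u/-c as in the trace (IO unit size, in-capsule data size)
  $rpc bdev_malloc_create 64 512                                    # 64 MiB, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # assumed initiator step
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1                           # emits "... disconnected 1 controller(s)"
  done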
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.909 rmmod nvme_tcp 00:12:41.909 rmmod nvme_fabrics 00:12:41.909 rmmod nvme_keyring 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2621341 ']' 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2621341 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2621341 ']' 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2621341 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2621341 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2621341' 00:12:41.909 killing process with pid 2621341 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2621341 00:12:41.909 22:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2621341 00:12:41.909 22:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.909 22:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.909 22:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.910 22:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.910 22:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.910 22:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.910 22:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.910 22:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:44.443 00:12:44.443 real 0m27.436s 00:12:44.443 user 1m13.558s 00:12:44.443 sys 0m7.187s 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.443 22:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.443 ************************************ 00:12:44.443 END TEST nvmf_connect_disconnect 00:12:44.443 ************************************ 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.443 ************************************ 00:12:44.443 START TEST nvmf_multitarget 00:12:44.443 ************************************ 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:44.443 * Looking for test storage... 00:12:44.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.443 22:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.443 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.444 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.444 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.444 22:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:51.008 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.008 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:51.009 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.009 22:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:51.009 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:51.009 Found net devices under 0000:af:00.0: cvl_0_0 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:51.009 Found net devices under 0000:af:00.1: cvl_0_1 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.009 22:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.009 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.009 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.009 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.009 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
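The cvl_0_0/cvl_0_1 names used in this setup are discovered, not hard-coded: gather_supported_nvmf_pci_devs matches each PCI function's vendor/device ID against the e810/x722/mlx lists built above and then reads the bound kernel net device out of sysfs. A simplified sketch of that lookup (the real helper also covers RDMA NICs, unbound devices and a pre-built PCI bus cache):

  intel=0x8086
  for dev in /sys/bus/pci/devices/*; do
      [[ $(cat "$dev/vendor") == "$intel" ]] || continue
      id=$(cat "$dev/device")
      [[ $id == 0x159b || $id == 0x1592 ]] || continue        # E810 device IDs from the e810 list
      for net in "$dev"/net/*; do
          [[ -e $net ]] || continue
          echo "Found net devices under ${dev##*/}: ${net##*/}"
      done
  done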
00:12:51.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:12:51.268 00:12:51.268 --- 10.0.0.2 ping statistics --- 00:12:51.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.268 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:12:51.268 00:12:51.268 --- 10.0.0.1 ping statistics --- 00:12:51.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.268 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2628694 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2628694 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2628694 ']' 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
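waitforlisten then blocks until the freshly started nvmf_tgt answers on its RPC socket. Its body lives in autotest_common.sh and is not expanded in this trace, so the loop below is only an illustrative equivalent: poll the default socket with a cheap RPC until it succeeds, giving up after max_retries=100 attempts (the retry count is the value declared above):

  sock=/var/tmp/spdk.sock
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do
      if "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
          break                                   # target is up and serving RPCs
      fi
      sleep 0.1
  done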
00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.268 22:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:51.268 [2024-07-24 22:00:30.358444] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:12:51.268 [2024-07-24 22:00:30.358490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.268 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.268 [2024-07-24 22:00:30.430817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.525 [2024-07-24 22:00:30.506552] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.525 [2024-07-24 22:00:30.506591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.525 [2024-07-24 22:00:30.506601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.525 [2024-07-24 22:00:30.506609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.525 [2024-07-24 22:00:30.506633] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.525 [2024-07-24 22:00:30.506676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.525 [2024-07-24 22:00:30.506774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.525 [2024-07-24 22:00:30.506797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.525 [2024-07-24 22:00:30.506798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.092 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.092 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:52.092 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.092 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.092 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.092 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.092 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:52.092 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.092 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:52.351 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:52.351 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:52.351 "nvmf_tgt_1" 00:12:52.351 22:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:52.351 "nvmf_tgt_2" 00:12:52.351 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.351 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:52.609 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:52.609 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:52.609 true 00:12:52.609 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:52.609 true 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.869 rmmod nvme_tcp 00:12:52.869 rmmod nvme_fabrics 00:12:52.869 rmmod nvme_keyring 00:12:52.869 22:00:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2628694 ']' 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2628694 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2628694 ']' 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2628694 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
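Condensed, the multitarget flow traced above is: count the targets the app starts with, add two more, check the count, remove them, check again. Everything goes through the test's multitarget_rpc.py helper and jq, exactly as in the trace (each delete prints "true" on success):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]          # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]          # default + the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]          # back to just the default target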
00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2628694 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2628694' 00:12:52.869 killing process with pid 2628694 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2628694 00:12:52.869 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2628694 00:12:53.129 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.129 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.129 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.129 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.129 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.129 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.129 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.129 22:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.663 00:12:55.663 real 0m11.124s 00:12:55.663 user 0m9.511s 00:12:55.663 sys 0m5.851s 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:55.663 ************************************ 00:12:55.663 END TEST nvmf_multitarget 00:12:55.663 ************************************ 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.663 ************************************ 00:12:55.663 START TEST nvmf_rpc 00:12:55.663 ************************************ 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:55.663 * Looking for test storage... 
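[editor sketch, not part of the captured trace] The nvmf_rpc stage that starts here is driven by the run_test wrapper; assuming an SPDK build tree at the workspace path shown above and root privileges (which the autotest run has), the same script could be invoked directly as:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/target/rpc.sh --transport=tcp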
00:12:55.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.663 22:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.663 22:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.253 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.254 22:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:02.254 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:02.254 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.254 
22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:02.254 Found net devices under 0000:af:00.0: cvl_0_0 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:02.254 Found net devices under 0000:af:00.1: cvl_0_1 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.254 22:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:02.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:13:02.254 00:13:02.254 --- 10.0.0.2 ping statistics --- 00:13:02.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.254 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:13:02.254 00:13:02.254 --- 10.0.0.1 ping statistics --- 00:13:02.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.254 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2632668 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2632668 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2632668 ']' 00:13:02.254 22:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.254 22:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.254 [2024-07-24 22:00:41.439644] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:13:02.254 [2024-07-24 22:00:41.439694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.522 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.522 [2024-07-24 22:00:41.514387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.522 [2024-07-24 22:00:41.588928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.522 [2024-07-24 22:00:41.588966] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.522 [2024-07-24 22:00:41.588975] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.522 [2024-07-24 22:00:41.588984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.522 [2024-07-24 22:00:41.588991] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
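[editor sketch, not part of the captured trace] The waitforlisten step above gates on the target's RPC socket becoming available; a condensed version of that start-up, with the namespace name, binary path and flags taken from this run and the rpc_get_methods readiness probe an assumption mirroring what the helper does, would be:
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket until the target answers; only then issue nvmf_* RPCs
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done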
00:13:02.522 [2024-07-24 22:00:41.589037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.522 [2024-07-24 22:00:41.589136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.522 [2024-07-24 22:00:41.589197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.522 [2024-07-24 22:00:41.589199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.090 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.090 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:03.090 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.090 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.090 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.090 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.090 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:03.090 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.090 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:03.349 "tick_rate": 2500000000, 00:13:03.349 "poll_groups": [ 00:13:03.349 { 00:13:03.349 "name": "nvmf_tgt_poll_group_000", 00:13:03.349 "admin_qpairs": 0, 00:13:03.349 "io_qpairs": 0, 00:13:03.349 "current_admin_qpairs": 0, 00:13:03.349 "current_io_qpairs": 0, 00:13:03.349 "pending_bdev_io": 0, 00:13:03.349 "completed_nvme_io": 0, 00:13:03.349 "transports": [] 00:13:03.349 }, 00:13:03.349 { 00:13:03.349 "name": "nvmf_tgt_poll_group_001", 00:13:03.349 "admin_qpairs": 0, 00:13:03.349 "io_qpairs": 0, 00:13:03.349 "current_admin_qpairs": 0, 00:13:03.349 "current_io_qpairs": 0, 00:13:03.349 "pending_bdev_io": 0, 00:13:03.349 "completed_nvme_io": 0, 00:13:03.349 "transports": [] 00:13:03.349 }, 00:13:03.349 { 00:13:03.349 "name": "nvmf_tgt_poll_group_002", 00:13:03.349 "admin_qpairs": 0, 00:13:03.349 "io_qpairs": 0, 00:13:03.349 "current_admin_qpairs": 0, 00:13:03.349 "current_io_qpairs": 0, 00:13:03.349 "pending_bdev_io": 0, 00:13:03.349 "completed_nvme_io": 0, 00:13:03.349 "transports": [] 00:13:03.349 }, 00:13:03.349 { 00:13:03.349 "name": "nvmf_tgt_poll_group_003", 00:13:03.349 "admin_qpairs": 0, 00:13:03.349 "io_qpairs": 0, 00:13:03.349 "current_admin_qpairs": 0, 00:13:03.349 "current_io_qpairs": 0, 00:13:03.349 "pending_bdev_io": 0, 00:13:03.349 "completed_nvme_io": 0, 00:13:03.349 "transports": [] 00:13:03.349 } 00:13:03.349 ] 00:13:03.349 }' 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
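[editor sketch, not part of the captured trace] The four poll groups counted above correspond to the four cores enabled by the -m 0xF mask, and each group's "transports" array stays empty until nvmf_create_transport runs; the same checks can be made by hand, assuming the standard rpc.py client against the same default socket:
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats | jq '.poll_groups | length'        # expect 4
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats | jq '.poll_groups[0].transports'   # [] before the transport is created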
00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.349 [2024-07-24 22:00:42.405421] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:03.349 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:03.350 "tick_rate": 2500000000, 00:13:03.350 "poll_groups": [ 00:13:03.350 { 00:13:03.350 "name": "nvmf_tgt_poll_group_000", 00:13:03.350 "admin_qpairs": 0, 00:13:03.350 "io_qpairs": 0, 00:13:03.350 "current_admin_qpairs": 0, 00:13:03.350 "current_io_qpairs": 0, 00:13:03.350 "pending_bdev_io": 0, 00:13:03.350 "completed_nvme_io": 0, 00:13:03.350 "transports": [ 00:13:03.350 { 00:13:03.350 "trtype": "TCP" 00:13:03.350 } 00:13:03.350 ] 00:13:03.350 }, 00:13:03.350 { 00:13:03.350 "name": "nvmf_tgt_poll_group_001", 00:13:03.350 "admin_qpairs": 0, 00:13:03.350 "io_qpairs": 0, 00:13:03.350 "current_admin_qpairs": 0, 00:13:03.350 "current_io_qpairs": 0, 00:13:03.350 "pending_bdev_io": 0, 00:13:03.350 "completed_nvme_io": 0, 00:13:03.350 "transports": [ 00:13:03.350 { 00:13:03.350 "trtype": "TCP" 00:13:03.350 } 00:13:03.350 ] 00:13:03.350 }, 00:13:03.350 { 00:13:03.350 "name": "nvmf_tgt_poll_group_002", 00:13:03.350 "admin_qpairs": 0, 00:13:03.350 "io_qpairs": 0, 00:13:03.350 "current_admin_qpairs": 0, 00:13:03.350 "current_io_qpairs": 0, 00:13:03.350 "pending_bdev_io": 0, 00:13:03.350 "completed_nvme_io": 0, 00:13:03.350 "transports": [ 00:13:03.350 { 00:13:03.350 "trtype": "TCP" 00:13:03.350 } 00:13:03.350 ] 00:13:03.350 }, 00:13:03.350 { 00:13:03.350 "name": "nvmf_tgt_poll_group_003", 00:13:03.350 "admin_qpairs": 0, 00:13:03.350 "io_qpairs": 0, 00:13:03.350 "current_admin_qpairs": 0, 00:13:03.350 "current_io_qpairs": 0, 00:13:03.350 "pending_bdev_io": 0, 00:13:03.350 "completed_nvme_io": 0, 00:13:03.350 "transports": [ 00:13:03.350 { 00:13:03.350 "trtype": "TCP" 00:13:03.350 } 00:13:03.350 ] 00:13:03.350 } 00:13:03.350 ] 00:13:03.350 }' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:03.350 22:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.350 Malloc1 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.350 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.609 [2024-07-24 22:00:42.588354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:13:03.609 [2024-07-24 22:00:42.623028] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:13:03.609 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:03.609 could not add new controller: failed to write to nvme-fabrics device 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.609 22:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.983 22:00:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.983 22:00:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.983 22:00:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.983 22:00:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.983 22:00:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.884 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.884 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.884 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.884 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.884 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.884 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:06.884 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.144 [2024-07-24 22:00:46.186830] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:13:07.144 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:07.144 could not add new controller: failed to write to nvme-fabrics device 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.144 22:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.520 22:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.520 22:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.520 22:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.520 22:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:08.520 22:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
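[editor sketch, not part of the captured trace] The connect failures and successes above exercise the subsystem's host access control; a condensed view of that behaviour, with the subsystem NQN, host NQN and address taken from this run and rpc.py assumed to target the same /var/tmp/spdk.sock, is:
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  # while the host is not whitelisted and allow_any_host is disabled, the fabric connect is rejected:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=$HOSTNQN   # -> Input/output error
  # either whitelist this specific host ...
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
  # ... or open the subsystem to any host, as the test does before the successful connect above:
  scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=$HOSTNQN   # -> succeeds
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                                              # tear the session down again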
00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:10.423 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.681 [2024-07-24 22:00:49.687633] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.681 
22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.681 22:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.081 22:00:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.081 22:00:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.081 22:00:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.081 22:00:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:12.081 22:00:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:13.985 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 [2024-07-24 22:00:53.234115] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.245 22:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.647 22:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.647 22:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:13:15.647 22:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.647 22:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:15.647 22:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:17.584 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.843 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.844 [2024-07-24 22:00:56.853995] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.844 22:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.220 22:00:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.220 22:00:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:19.220 22:00:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.220 22:00:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:19.220 22:00:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.124 22:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.124 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 [2024-07-24 22:01:00.377143] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.384 22:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.775 22:01:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:22.775 22:01:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:22.775 22:01:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.775 22:01:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:22.776 22:01:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.681 [2024-07-24 22:01:03.866932] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.681 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.682 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.682 22:01:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.055 22:01:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.055 22:01:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:26.055 22:01:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.055 22:01:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:26.055 22:01:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:28.598 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:28.599 22:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 [2024-07-24 22:01:07.404690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 [2024-07-24 22:01:07.452818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 [2024-07-24 22:01:07.504976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 [2024-07-24 22:01:07.553112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 [2024-07-24 22:01:07.601263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:28.600 "tick_rate": 2500000000, 00:13:28.600 "poll_groups": [ 00:13:28.600 { 00:13:28.600 "name": "nvmf_tgt_poll_group_000", 00:13:28.600 "admin_qpairs": 2, 00:13:28.600 "io_qpairs": 196, 00:13:28.600 "current_admin_qpairs": 0, 00:13:28.600 "current_io_qpairs": 0, 00:13:28.600 "pending_bdev_io": 0, 00:13:28.600 "completed_nvme_io": 297, 00:13:28.600 "transports": [ 00:13:28.600 { 00:13:28.600 "trtype": "TCP" 00:13:28.600 } 00:13:28.600 ] 00:13:28.600 }, 00:13:28.600 { 00:13:28.600 "name": "nvmf_tgt_poll_group_001", 00:13:28.600 "admin_qpairs": 2, 00:13:28.600 "io_qpairs": 196, 00:13:28.600 "current_admin_qpairs": 0, 00:13:28.600 "current_io_qpairs": 0, 00:13:28.600 "pending_bdev_io": 0, 00:13:28.600 "completed_nvme_io": 252, 00:13:28.600 "transports": [ 00:13:28.600 { 00:13:28.600 "trtype": "TCP" 00:13:28.600 } 00:13:28.600 ] 00:13:28.600 }, 00:13:28.600 { 00:13:28.600 "name": "nvmf_tgt_poll_group_002", 00:13:28.600 "admin_qpairs": 1, 00:13:28.600 "io_qpairs": 196, 00:13:28.600 "current_admin_qpairs": 0, 00:13:28.600 "current_io_qpairs": 0, 00:13:28.600 "pending_bdev_io": 0, 00:13:28.600 "completed_nvme_io": 291, 00:13:28.600 "transports": [ 00:13:28.600 { 00:13:28.600 "trtype": "TCP" 00:13:28.600 } 00:13:28.600 ] 00:13:28.600 }, 00:13:28.600 { 00:13:28.600 "name": "nvmf_tgt_poll_group_003", 00:13:28.600 "admin_qpairs": 2, 00:13:28.600 "io_qpairs": 196, 00:13:28.600 "current_admin_qpairs": 0, 00:13:28.600 "current_io_qpairs": 0, 00:13:28.600 "pending_bdev_io": 0, 00:13:28.600 "completed_nvme_io": 294, 00:13:28.600 "transports": [ 00:13:28.600 { 00:13:28.600 "trtype": "TCP" 00:13:28.600 } 00:13:28.600 ] 00:13:28.600 } 00:13:28.600 ] 00:13:28.600 }' 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:28.600 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:13:28.601 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:28.601 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:28.601 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:28.601 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.601 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:28.601 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.601 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:28.601 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.601 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.601 rmmod nvme_tcp 00:13:28.601 rmmod nvme_fabrics 00:13:28.601 rmmod nvme_keyring 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2632668 ']' 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2632668 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2632668 ']' 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2632668 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2632668 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2632668' 00:13:28.860 killing process with pid 2632668 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2632668 00:13:28.860 22:01:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2632668 00:13:29.120 22:01:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.120 22:01:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:29.120 22:01:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:29.120 22:01:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.120 22:01:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.120 22:01:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.120 22:01:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.120 22:01:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.056 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:31.056 00:13:31.056 real 0m35.778s 00:13:31.056 user 1m46.654s 00:13:31.056 sys 0m8.250s 00:13:31.056 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.056 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.056 ************************************ 00:13:31.056 END TEST nvmf_rpc 00:13:31.056 ************************************ 00:13:31.056 22:01:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:31.056 22:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:31.056 22:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.056 22:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.321 ************************************ 00:13:31.321 START TEST nvmf_invalid 00:13:31.321 ************************************ 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:31.321 * Looking for test storage... 00:13:31.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:31.321 22:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.321 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:31.322 22:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:31.322 22:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:37.891 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.891 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:38.151 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:38.151 Found net devices under 0000:af:00.0: cvl_0_0 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.151 22:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:38.151 Found net devices under 0000:af:00.1: cvl_0_1 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:38.151 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.410 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.410 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:38.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:13:38.411 00:13:38.411 --- 10.0.0.2 ping statistics --- 00:13:38.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.411 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:38.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:13:38.411 00:13:38.411 --- 10.0.0.1 ping statistics --- 00:13:38.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.411 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2640843 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2640843 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2640843 ']' 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:38.411 22:01:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:38.411 [2024-07-24 22:01:17.502087] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:13:38.411 [2024-07-24 22:01:17.502138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.411 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.411 [2024-07-24 22:01:17.577081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.670 [2024-07-24 22:01:17.651957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.670 [2024-07-24 22:01:17.651994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.670 [2024-07-24 22:01:17.652004] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.670 [2024-07-24 22:01:17.652012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.670 [2024-07-24 22:01:17.652019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
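For reference, the network bring-up traced in nvmf/common.sh above reduces to the following sequence: the target-side port (cvl_0_0) is moved into a private network namespace and the initiator-side port (cvl_0_1) stays in the root namespace, giving a 10.0.0.0/24 point-to-point TCP test path. This is a condensed sketch of exactly the commands shown in the trace; only the NS shell variable is introduced here for readability, the interface names and addresses are the ones used in this run.

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                       # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                    # initiator -> target reachability
  ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator reachability

With that path in place, nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is the process whose startup notices follow.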
00:13:38.670 [2024-07-24 22:01:17.652066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.670 [2024-07-24 22:01:17.652087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.670 [2024-07-24 22:01:17.652105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.670 [2024-07-24 22:01:17.652106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.236 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.236 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:39.236 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.236 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.236 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:39.236 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.236 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:39.236 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode716 00:13:39.494 [2024-07-24 22:01:18.507476] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:39.494 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:39.494 { 00:13:39.494 "nqn": "nqn.2016-06.io.spdk:cnode716", 00:13:39.494 "tgt_name": "foobar", 00:13:39.494 "method": "nvmf_create_subsystem", 00:13:39.494 "req_id": 1 00:13:39.494 } 00:13:39.494 Got JSON-RPC error response 00:13:39.494 response: 00:13:39.494 { 00:13:39.494 "code": -32603, 00:13:39.494 "message": "Unable to find target foobar" 00:13:39.494 }' 00:13:39.494 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:39.494 { 00:13:39.494 "nqn": "nqn.2016-06.io.spdk:cnode716", 00:13:39.494 "tgt_name": "foobar", 00:13:39.494 "method": "nvmf_create_subsystem", 00:13:39.494 "req_id": 1 00:13:39.494 } 00:13:39.495 Got JSON-RPC error response 00:13:39.495 response: 00:13:39.495 { 00:13:39.495 "code": -32603, 00:13:39.495 "message": "Unable to find target foobar" 00:13:39.495 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:39.495 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:39.495 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9925 00:13:39.495 [2024-07-24 22:01:18.700169] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9925: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:39.753 { 00:13:39.753 "nqn": "nqn.2016-06.io.spdk:cnode9925", 00:13:39.753 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:39.753 "method": "nvmf_create_subsystem", 00:13:39.753 "req_id": 1 00:13:39.753 } 00:13:39.753 Got JSON-RPC error response 
00:13:39.753 response: 00:13:39.753 { 00:13:39.753 "code": -32602, 00:13:39.753 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:39.753 }' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:39.753 { 00:13:39.753 "nqn": "nqn.2016-06.io.spdk:cnode9925", 00:13:39.753 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:39.753 "method": "nvmf_create_subsystem", 00:13:39.753 "req_id": 1 00:13:39.753 } 00:13:39.753 Got JSON-RPC error response 00:13:39.753 response: 00:13:39.753 { 00:13:39.753 "code": -32602, 00:13:39.753 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:39.753 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31774 00:13:39.753 [2024-07-24 22:01:18.888723] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31774: invalid model number 'SPDK_Controller' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:39.753 { 00:13:39.753 "nqn": "nqn.2016-06.io.spdk:cnode31774", 00:13:39.753 "model_number": "SPDK_Controller\u001f", 00:13:39.753 "method": "nvmf_create_subsystem", 00:13:39.753 "req_id": 1 00:13:39.753 } 00:13:39.753 Got JSON-RPC error response 00:13:39.753 response: 00:13:39.753 { 00:13:39.753 "code": -32602, 00:13:39.753 "message": "Invalid MN SPDK_Controller\u001f" 00:13:39.753 }' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:39.753 { 00:13:39.753 "nqn": "nqn.2016-06.io.spdk:cnode31774", 00:13:39.753 "model_number": "SPDK_Controller\u001f", 00:13:39.753 "method": "nvmf_create_subsystem", 00:13:39.753 "req_id": 1 00:13:39.753 } 00:13:39.753 Got JSON-RPC error response 00:13:39.753 response: 00:13:39.753 { 00:13:39.753 "code": -32602, 00:13:39.753 "message": "Invalid MN SPDK_Controller\u001f" 00:13:39.753 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 62 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.753 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:40.012 22:01:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- 
# (( ll++ )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.012 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='\' 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '>O`P~R2} 35F+:z;}zij\' 00:13:40.013 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '>O`P~R2} 35F+:z;}zij\' nqn.2016-06.io.spdk:cnode25953 00:13:40.272 [2024-07-24 22:01:19.249959] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25953: invalid serial number '>O`P~R2} 35F+:z;}zij\' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:40.272 { 00:13:40.272 "nqn": "nqn.2016-06.io.spdk:cnode25953", 00:13:40.272 "serial_number": ">O`P~R2} 35F+:z;}zij\\", 00:13:40.272 "method": "nvmf_create_subsystem", 00:13:40.272 "req_id": 1 00:13:40.272 } 00:13:40.272 Got JSON-RPC error response 00:13:40.272 response: 00:13:40.272 { 00:13:40.272 "code": -32602, 00:13:40.272 "message": "Invalid SN >O`P~R2} 35F+:z;}zij\\" 00:13:40.272 }' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:40.272 { 00:13:40.272 "nqn": "nqn.2016-06.io.spdk:cnode25953", 00:13:40.272 "serial_number": ">O`P~R2} 35F+:z;}zij\\", 00:13:40.272 "method": "nvmf_create_subsystem", 00:13:40.272 "req_id": 1 00:13:40.272 } 00:13:40.272 Got JSON-RPC error response 00:13:40.272 response: 00:13:40.272 { 00:13:40.272 "code": -32602, 00:13:40.272 "message": "Invalid SN >O`P~R2} 35F+:z;}zij\\" 00:13:40.272 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # string+='(' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # echo -e '\x53' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:40.272 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 52 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.273 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.531 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:40.531 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:40.531 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:40.532 22:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'*!g0tCh(%rv9L{SW52K(H4dGiV_OikDMbuy3dyP0' 00:13:40.532 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '*!g0tCh(%rv9L{SW52K(H4dGiV_OikDMbuy3dyP0' nqn.2016-06.io.spdk:cnode5022 00:13:40.791 [2024-07-24 22:01:19.755611] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5022: invalid model number '*!g0tCh(%rv9L{SW52K(H4dGiV_OikDMbuy3dyP0' 00:13:40.791 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:40.791 { 00:13:40.791 "nqn": "nqn.2016-06.io.spdk:cnode5022", 00:13:40.791 "model_number": "*!g0tCh(%rv9L{SW52K(H4dGiV_OikDMbu\u007fy3dyP0", 00:13:40.791 "method": "nvmf_create_subsystem", 00:13:40.791 "req_id": 1 00:13:40.791 } 00:13:40.791 Got JSON-RPC error response 00:13:40.791 response: 00:13:40.791 { 00:13:40.791 "code": -32602, 00:13:40.791 "message": "Invalid MN *!g0tCh(%rv9L{SW52K(H4dGiV_OikDMbu\u007fy3dyP0" 00:13:40.791 }' 00:13:40.791 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:40.791 { 00:13:40.791 "nqn": "nqn.2016-06.io.spdk:cnode5022", 00:13:40.791 "model_number": "*!g0tCh(%rv9L{SW52K(H4dGiV_OikDMbu\u007fy3dyP0", 00:13:40.791 "method": "nvmf_create_subsystem", 00:13:40.791 "req_id": 1 00:13:40.791 } 00:13:40.791 Got JSON-RPC error response 00:13:40.791 response: 00:13:40.791 { 00:13:40.791 "code": -32602, 00:13:40.791 "message": "Invalid MN *!g0tCh(%rv9L{SW52K(H4dGiV_OikDMbu\u007fy3dyP0" 00:13:40.791 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:40.791 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:40.791 [2024-07-24 22:01:19.936288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.791 22:01:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:41.049 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:41.049 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:41.049 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:41.049 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:41.049 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:41.307 [2024-07-24 22:01:20.313506] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:41.307 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:41.307 { 00:13:41.307 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:41.307 "listen_address": { 00:13:41.307 "trtype": "tcp", 00:13:41.307 "traddr": "", 00:13:41.307 "trsvcid": "4421" 00:13:41.307 }, 00:13:41.307 "method": "nvmf_subsystem_remove_listener", 00:13:41.307 "req_id": 1 00:13:41.307 } 00:13:41.307 Got JSON-RPC error response 00:13:41.307 response: 00:13:41.307 { 00:13:41.307 "code": -32602, 00:13:41.307 "message": "Invalid parameters" 00:13:41.307 }' 00:13:41.307 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@70 -- # [[ request: 00:13:41.307 { 00:13:41.307 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:41.307 "listen_address": { 00:13:41.307 "trtype": "tcp", 00:13:41.307 "traddr": "", 00:13:41.308 "trsvcid": "4421" 00:13:41.308 }, 00:13:41.308 "method": "nvmf_subsystem_remove_listener", 00:13:41.308 "req_id": 1 00:13:41.308 } 00:13:41.308 Got JSON-RPC error response 00:13:41.308 response: 00:13:41.308 { 00:13:41.308 "code": -32602, 00:13:41.308 "message": "Invalid parameters" 00:13:41.308 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:41.308 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5230 -i 0 00:13:41.308 [2024-07-24 22:01:20.506106] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5230: invalid cntlid range [0-65519] 00:13:41.566 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:41.566 { 00:13:41.566 "nqn": "nqn.2016-06.io.spdk:cnode5230", 00:13:41.566 "min_cntlid": 0, 00:13:41.566 "method": "nvmf_create_subsystem", 00:13:41.566 "req_id": 1 00:13:41.566 } 00:13:41.566 Got JSON-RPC error response 00:13:41.566 response: 00:13:41.566 { 00:13:41.566 "code": -32602, 00:13:41.566 "message": "Invalid cntlid range [0-65519]" 00:13:41.566 }' 00:13:41.566 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:41.566 { 00:13:41.566 "nqn": "nqn.2016-06.io.spdk:cnode5230", 00:13:41.566 "min_cntlid": 0, 00:13:41.566 "method": "nvmf_create_subsystem", 00:13:41.566 "req_id": 1 00:13:41.566 } 00:13:41.566 Got JSON-RPC error response 00:13:41.566 response: 00:13:41.566 { 00:13:41.566 "code": -32602, 00:13:41.566 "message": "Invalid cntlid range [0-65519]" 00:13:41.566 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:41.566 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7511 -i 65520 00:13:41.566 [2024-07-24 22:01:20.698806] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7511: invalid cntlid range [65520-65519] 00:13:41.566 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:41.566 { 00:13:41.566 "nqn": "nqn.2016-06.io.spdk:cnode7511", 00:13:41.566 "min_cntlid": 65520, 00:13:41.566 "method": "nvmf_create_subsystem", 00:13:41.566 "req_id": 1 00:13:41.566 } 00:13:41.566 Got JSON-RPC error response 00:13:41.566 response: 00:13:41.566 { 00:13:41.566 "code": -32602, 00:13:41.566 "message": "Invalid cntlid range [65520-65519]" 00:13:41.566 }' 00:13:41.566 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:41.566 { 00:13:41.566 "nqn": "nqn.2016-06.io.spdk:cnode7511", 00:13:41.566 "min_cntlid": 65520, 00:13:41.566 "method": "nvmf_create_subsystem", 00:13:41.566 "req_id": 1 00:13:41.566 } 00:13:41.566 Got JSON-RPC error response 00:13:41.566 response: 00:13:41.566 { 00:13:41.566 "code": -32602, 00:13:41.566 "message": "Invalid cntlid range [65520-65519]" 00:13:41.566 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:41.566 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21630 -I 0 00:13:41.824 
[2024-07-24 22:01:20.891382] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21630: invalid cntlid range [1-0] 00:13:41.824 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:41.824 { 00:13:41.824 "nqn": "nqn.2016-06.io.spdk:cnode21630", 00:13:41.824 "max_cntlid": 0, 00:13:41.824 "method": "nvmf_create_subsystem", 00:13:41.824 "req_id": 1 00:13:41.824 } 00:13:41.824 Got JSON-RPC error response 00:13:41.824 response: 00:13:41.824 { 00:13:41.824 "code": -32602, 00:13:41.824 "message": "Invalid cntlid range [1-0]" 00:13:41.824 }' 00:13:41.824 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:41.824 { 00:13:41.824 "nqn": "nqn.2016-06.io.spdk:cnode21630", 00:13:41.824 "max_cntlid": 0, 00:13:41.824 "method": "nvmf_create_subsystem", 00:13:41.824 "req_id": 1 00:13:41.824 } 00:13:41.824 Got JSON-RPC error response 00:13:41.824 response: 00:13:41.824 { 00:13:41.824 "code": -32602, 00:13:41.824 "message": "Invalid cntlid range [1-0]" 00:13:41.824 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:41.824 22:01:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24858 -I 65520 00:13:42.082 [2024-07-24 22:01:21.075983] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24858: invalid cntlid range [1-65520] 00:13:42.082 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:42.082 { 00:13:42.082 "nqn": "nqn.2016-06.io.spdk:cnode24858", 00:13:42.082 "max_cntlid": 65520, 00:13:42.082 "method": "nvmf_create_subsystem", 00:13:42.082 "req_id": 1 00:13:42.082 } 00:13:42.082 Got JSON-RPC error response 00:13:42.083 response: 00:13:42.083 { 00:13:42.083 "code": -32602, 00:13:42.083 "message": "Invalid cntlid range [1-65520]" 00:13:42.083 }' 00:13:42.083 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:42.083 { 00:13:42.083 "nqn": "nqn.2016-06.io.spdk:cnode24858", 00:13:42.083 "max_cntlid": 65520, 00:13:42.083 "method": "nvmf_create_subsystem", 00:13:42.083 "req_id": 1 00:13:42.083 } 00:13:42.083 Got JSON-RPC error response 00:13:42.083 response: 00:13:42.083 { 00:13:42.083 "code": -32602, 00:13:42.083 "message": "Invalid cntlid range [1-65520]" 00:13:42.083 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:42.083 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32420 -i 6 -I 5 00:13:42.083 [2024-07-24 22:01:21.260584] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32420: invalid cntlid range [6-5] 00:13:42.083 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:42.083 { 00:13:42.083 "nqn": "nqn.2016-06.io.spdk:cnode32420", 00:13:42.083 "min_cntlid": 6, 00:13:42.083 "max_cntlid": 5, 00:13:42.083 "method": "nvmf_create_subsystem", 00:13:42.083 "req_id": 1 00:13:42.083 } 00:13:42.083 Got JSON-RPC error response 00:13:42.083 response: 00:13:42.083 { 00:13:42.083 "code": -32602, 00:13:42.083 "message": "Invalid cntlid range [6-5]" 00:13:42.083 }' 00:13:42.083 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:42.083 { 00:13:42.083 "nqn": 
"nqn.2016-06.io.spdk:cnode32420", 00:13:42.083 "min_cntlid": 6, 00:13:42.083 "max_cntlid": 5, 00:13:42.083 "method": "nvmf_create_subsystem", 00:13:42.083 "req_id": 1 00:13:42.083 } 00:13:42.083 Got JSON-RPC error response 00:13:42.083 response: 00:13:42.083 { 00:13:42.083 "code": -32602, 00:13:42.083 "message": "Invalid cntlid range [6-5]" 00:13:42.083 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:42.083 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:42.341 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:42.341 { 00:13:42.341 "name": "foobar", 00:13:42.341 "method": "nvmf_delete_target", 00:13:42.341 "req_id": 1 00:13:42.341 } 00:13:42.341 Got JSON-RPC error response 00:13:42.341 response: 00:13:42.341 { 00:13:42.341 "code": -32602, 00:13:42.341 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:42.341 }' 00:13:42.341 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:42.341 { 00:13:42.341 "name": "foobar", 00:13:42.341 "method": "nvmf_delete_target", 00:13:42.341 "req_id": 1 00:13:42.341 } 00:13:42.341 Got JSON-RPC error response 00:13:42.341 response: 00:13:42.341 { 00:13:42.342 "code": -32602, 00:13:42.342 "message": "The specified target doesn't exist, cannot delete it." 00:13:42.342 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.342 rmmod nvme_tcp 00:13:42.342 rmmod nvme_fabrics 00:13:42.342 rmmod nvme_keyring 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2640843 ']' 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2640843 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2640843 ']' 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2640843 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.342 
22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2640843 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2640843' 00:13:42.342 killing process with pid 2640843 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2640843 00:13:42.342 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2640843 00:13:42.601 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.601 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.601 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.601 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.601 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.601 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.601 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.601 22:01:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.131 00:13:45.131 real 0m13.523s 00:13:45.131 user 0m20.293s 00:13:45.131 sys 0m6.502s 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:45.131 ************************************ 00:13:45.131 END TEST nvmf_invalid 00:13:45.131 ************************************ 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.131 ************************************ 00:13:45.131 START TEST nvmf_connect_stress 00:13:45.131 ************************************ 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:45.131 * Looking for test storage... 
00:13:45.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.131 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.132 22:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.734 22:01:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:51.734 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:51.734 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:51.734 Found net devices under 0000:af:00.0: cvl_0_0 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:51.734 Found net devices under 0000:af:00.1: cvl_0_1 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.734 22:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.734 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.734 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.734 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.734 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.734 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.734 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.734 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:13:51.735 00:13:51.735 --- 10.0.0.2 ping statistics --- 00:13:51.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.735 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:13:51.735 00:13:51.735 --- 10.0.0.1 ping statistics --- 00:13:51.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.735 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2645382 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2645382 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2645382 ']' 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.735 22:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.735 [2024-07-24 22:01:30.276726] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:13:51.735 [2024-07-24 22:01:30.276776] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.735 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.735 [2024-07-24 22:01:30.350791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.735 [2024-07-24 22:01:30.418014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.735 [2024-07-24 22:01:30.418058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.735 [2024-07-24 22:01:30.418067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.735 [2024-07-24 22:01:30.418075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.735 [2024-07-24 22:01:30.418082] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.735 [2024-07-24 22:01:30.418188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.735 [2024-07-24 22:01:30.418270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.735 [2024-07-24 22:01:30.418271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.993 [2024-07-24 22:01:31.141751] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.993 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.994 [2024-07-24 22:01:31.175808] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.994 NULL1 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2645428 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.994 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.252 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.253 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.253 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.253 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:52.253 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.253 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.253 22:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.511 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.511 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:52.511 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.511 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.511 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.769 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.769 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:52.769 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.769 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.769 22:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.335 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.335 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:53.335 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.335 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.335 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.593 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.593 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:53.593 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.593 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.593 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.851 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.851 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:53.851 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.851 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.851 22:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.109 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.109 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:54.109 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.109 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.109 22:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.672 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.672 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:54.672 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.672 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.672 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.930 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.930 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:54.930 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.930 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.930 22:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.188 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.188 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:55.188 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.188 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.188 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.446 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.446 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:55.446 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.446 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.446 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.704 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.704 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:55.704 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.704 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.704 22:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.269 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.269 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:56.269 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.269 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.269 22:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.527 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.527 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:56.527 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.527 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.527 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.784 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.784 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:56.784 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.784 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.784 22:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.039 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.039 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:57.039 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.039 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.039 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.296 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.296 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:57.296 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.296 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.296 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.860 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.860 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:57.860 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.860 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.861 22:01:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.118 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.118 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:58.118 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.118 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.118 22:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.375 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.375 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:58.375 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.375 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.375 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.632 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.632 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:58.632 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.632 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.632 22:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.198 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.198 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:59.198 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.198 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.198 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.456 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.456 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:59.456 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.456 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.456 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.713 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.713 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:59.713 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.713 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.713 22:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.971 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.971 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:13:59.971 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.971 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.971 22:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.229 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.229 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:14:00.229 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.229 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.229 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.796 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.796 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:14:00.796 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.796 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.796 22:01:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.053 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.053 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:14:01.053 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.053 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.053 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.311 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.311 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:14:01.311 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.311 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.311 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.568 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.568 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:14:01.568 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.568 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.568 22:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.133 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.133 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:14:02.133 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.133 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.133 22:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.133 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2645428 00:14:02.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2645428) - No such process 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2645428 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.391 rmmod nvme_tcp 00:14:02.391 rmmod nvme_fabrics 00:14:02.391 rmmod nvme_keyring 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2645382 ']' 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2645382 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2645382 ']' 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2645382 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2645382 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2645382' 00:14:02.391 killing process with pid 2645382 00:14:02.391 22:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2645382 00:14:02.391 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2645382 00:14:02.650 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.650 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.650 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.650 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.650 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.650 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.650 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.650 22:01:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.590 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:04.590 00:14:04.590 real 0m19.922s 00:14:04.590 user 0m40.532s 00:14:04.591 sys 0m9.607s 00:14:04.591 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.591 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.591 ************************************ 00:14:04.591 END TEST nvmf_connect_stress 00:14:04.591 ************************************ 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.850 ************************************ 00:14:04.850 START TEST nvmf_fused_ordering 00:14:04.850 ************************************ 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:04.850 * Looking for test storage... 
00:14:04.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.850 22:01:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.850 22:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:11.413 22:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:11.413 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:11.413 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:11.413 Found net devices under 0000:af:00.0: cvl_0_0 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.413 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:11.414 Found net devices under 0000:af:00.1: cvl_0_1 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.414 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.673 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.673 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.673 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:11.673 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.673 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.673 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.673 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:11.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:14:11.673 00:14:11.673 --- 10.0.0.2 ping statistics --- 00:14:11.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.673 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:14:11.931 00:14:11.931 --- 10.0.0.1 ping statistics --- 00:14:11.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.931 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2650953 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2650953 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2650953 ']' 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.931 22:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.931 [2024-07-24 22:01:50.987953] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
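The entries above show the harness launching the SPDK target inside the target-side network namespace and then waiting for its RPC socket before any rpc_cmd calls are issued. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and repo-relative paths; the polling loop stands in for the waitforlisten helper and is only an illustration, not that helper's actual implementation:

  # start the NVMe-oF/TCP target inside the namespace that holds cvl_0_0 (10.0.0.2)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # block until the app answers on its RPC socket; only then is it safe to configure it
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done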
00:14:11.931 [2024-07-24 22:01:50.988003] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.931 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.931 [2024-07-24 22:01:51.063644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.931 [2024-07-24 22:01:51.136049] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.931 [2024-07-24 22:01:51.136090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.931 [2024-07-24 22:01:51.136100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.931 [2024-07-24 22:01:51.136108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.931 [2024-07-24 22:01:51.136116] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.931 [2024-07-24 22:01:51.136138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.866 [2024-07-24 22:01:51.834959] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.866 [2024-07-24 22:01:51.855144] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.866 NULL1 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.866 22:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:12.866 [2024-07-24 22:01:51.912204] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
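Before the exerciser starts, the log above shows the target being provisioned through rpc_cmd: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a null bdev NULL1 attached as namespace 1 (reported below as a 1GB namespace). A minimal sketch of the same steps as direct scripts/rpc.py calls, assuming the default RPC socket and paths relative to the SPDK source tree:

  # provision the target (same arguments the test passes through rpc_cmd)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # then point the fused-ordering exerciser at the listener just created
  ./test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'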
00:14:12.866 [2024-07-24 22:01:51.912248] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2651046 ] 00:14:12.866 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.431 Attached to nqn.2016-06.io.spdk:cnode1 00:14:13.431 Namespace ID: 1 size: 1GB 00:14:13.431 fused_ordering(0) 00:14:13.431 fused_ordering(1) 00:14:13.431 fused_ordering(2) 00:14:13.431 fused_ordering(3) 00:14:13.431 fused_ordering(4) 00:14:13.431 fused_ordering(5) 00:14:13.431 fused_ordering(6) 00:14:13.431 fused_ordering(7) 00:14:13.431 fused_ordering(8) 00:14:13.431 fused_ordering(9) 00:14:13.431 fused_ordering(10) 00:14:13.431 fused_ordering(11) 00:14:13.431 fused_ordering(12) 00:14:13.431 fused_ordering(13) 00:14:13.431 fused_ordering(14) 00:14:13.431 fused_ordering(15) 00:14:13.431 fused_ordering(16) 00:14:13.431 fused_ordering(17) 00:14:13.431 fused_ordering(18) 00:14:13.431 fused_ordering(19) 00:14:13.431 fused_ordering(20) 00:14:13.431 fused_ordering(21) 00:14:13.431 fused_ordering(22) 00:14:13.431 fused_ordering(23) 00:14:13.431 fused_ordering(24) 00:14:13.431 fused_ordering(25) 00:14:13.431 fused_ordering(26) 00:14:13.431 fused_ordering(27) 00:14:13.431 fused_ordering(28) 00:14:13.431 fused_ordering(29) 00:14:13.431 fused_ordering(30) 00:14:13.432 fused_ordering(31) 00:14:13.432 fused_ordering(32) 00:14:13.432 fused_ordering(33) 00:14:13.432 fused_ordering(34) 00:14:13.432 fused_ordering(35) 00:14:13.432 fused_ordering(36) 00:14:13.432 fused_ordering(37) 00:14:13.432 fused_ordering(38) 00:14:13.432 fused_ordering(39) 00:14:13.432 fused_ordering(40) 00:14:13.432 fused_ordering(41) 00:14:13.432 fused_ordering(42) 00:14:13.432 fused_ordering(43) 00:14:13.432 fused_ordering(44) 00:14:13.432 fused_ordering(45) 00:14:13.432 fused_ordering(46) 00:14:13.432 fused_ordering(47) 00:14:13.432 fused_ordering(48) 00:14:13.432 fused_ordering(49) 00:14:13.432 fused_ordering(50) 00:14:13.432 fused_ordering(51) 00:14:13.432 fused_ordering(52) 00:14:13.432 fused_ordering(53) 00:14:13.432 fused_ordering(54) 00:14:13.432 fused_ordering(55) 00:14:13.432 fused_ordering(56) 00:14:13.432 fused_ordering(57) 00:14:13.432 fused_ordering(58) 00:14:13.432 fused_ordering(59) 00:14:13.432 fused_ordering(60) 00:14:13.432 fused_ordering(61) 00:14:13.432 fused_ordering(62) 00:14:13.432 fused_ordering(63) 00:14:13.432 fused_ordering(64) 00:14:13.432 fused_ordering(65) 00:14:13.432 fused_ordering(66) 00:14:13.432 fused_ordering(67) 00:14:13.432 fused_ordering(68) 00:14:13.432 fused_ordering(69) 00:14:13.432 fused_ordering(70) 00:14:13.432 fused_ordering(71) 00:14:13.432 fused_ordering(72) 00:14:13.432 fused_ordering(73) 00:14:13.432 fused_ordering(74) 00:14:13.432 fused_ordering(75) 00:14:13.432 fused_ordering(76) 00:14:13.432 fused_ordering(77) 00:14:13.432 fused_ordering(78) 00:14:13.432 fused_ordering(79) 00:14:13.432 fused_ordering(80) 00:14:13.432 fused_ordering(81) 00:14:13.432 fused_ordering(82) 00:14:13.432 fused_ordering(83) 00:14:13.432 fused_ordering(84) 00:14:13.432 fused_ordering(85) 00:14:13.432 fused_ordering(86) 00:14:13.432 fused_ordering(87) 00:14:13.432 fused_ordering(88) 00:14:13.432 fused_ordering(89) 00:14:13.432 fused_ordering(90) 00:14:13.432 fused_ordering(91) 00:14:13.432 fused_ordering(92) 00:14:13.432 fused_ordering(93) 00:14:13.432 fused_ordering(94) 00:14:13.432 fused_ordering(95) 00:14:13.432 fused_ordering(96) 
00:14:13.432 fused_ordering(97) 00:14:13.432 fused_ordering(98) 00:14:13.432 fused_ordering(99) 00:14:13.432 fused_ordering(100) 00:14:13.432 fused_ordering(101) 00:14:13.432 fused_ordering(102) 00:14:13.432 fused_ordering(103) 00:14:13.432 fused_ordering(104) 00:14:13.432 fused_ordering(105) 00:14:13.432 fused_ordering(106) 00:14:13.432 fused_ordering(107) 00:14:13.432 fused_ordering(108) 00:14:13.432 fused_ordering(109) 00:14:13.432 fused_ordering(110) 00:14:13.432 fused_ordering(111) 00:14:13.432 fused_ordering(112) 00:14:13.432 fused_ordering(113) 00:14:13.432 fused_ordering(114) 00:14:13.432 fused_ordering(115) 00:14:13.432 fused_ordering(116) 00:14:13.432 fused_ordering(117) 00:14:13.432 fused_ordering(118) 00:14:13.432 fused_ordering(119) 00:14:13.432 fused_ordering(120) 00:14:13.432 fused_ordering(121) 00:14:13.432 fused_ordering(122) 00:14:13.432 fused_ordering(123) 00:14:13.432 fused_ordering(124) 00:14:13.432 fused_ordering(125) 00:14:13.432 fused_ordering(126) 00:14:13.432 fused_ordering(127) 00:14:13.432 fused_ordering(128) 00:14:13.432 fused_ordering(129) 00:14:13.432 fused_ordering(130) 00:14:13.432 fused_ordering(131) 00:14:13.432 fused_ordering(132) 00:14:13.432 fused_ordering(133) 00:14:13.432 fused_ordering(134) 00:14:13.432 fused_ordering(135) 00:14:13.432 fused_ordering(136) 00:14:13.432 fused_ordering(137) 00:14:13.432 fused_ordering(138) 00:14:13.432 fused_ordering(139) 00:14:13.432 fused_ordering(140) 00:14:13.432 fused_ordering(141) 00:14:13.432 fused_ordering(142) 00:14:13.432 fused_ordering(143) 00:14:13.432 fused_ordering(144) 00:14:13.432 fused_ordering(145) 00:14:13.432 fused_ordering(146) 00:14:13.432 fused_ordering(147) 00:14:13.432 fused_ordering(148) 00:14:13.432 fused_ordering(149) 00:14:13.432 fused_ordering(150) 00:14:13.432 fused_ordering(151) 00:14:13.432 fused_ordering(152) 00:14:13.432 fused_ordering(153) 00:14:13.432 fused_ordering(154) 00:14:13.432 fused_ordering(155) 00:14:13.432 fused_ordering(156) 00:14:13.432 fused_ordering(157) 00:14:13.432 fused_ordering(158) 00:14:13.432 fused_ordering(159) 00:14:13.432 fused_ordering(160) 00:14:13.432 fused_ordering(161) 00:14:13.432 fused_ordering(162) 00:14:13.432 fused_ordering(163) 00:14:13.432 fused_ordering(164) 00:14:13.432 fused_ordering(165) 00:14:13.432 fused_ordering(166) 00:14:13.432 fused_ordering(167) 00:14:13.432 fused_ordering(168) 00:14:13.432 fused_ordering(169) 00:14:13.432 fused_ordering(170) 00:14:13.432 fused_ordering(171) 00:14:13.432 fused_ordering(172) 00:14:13.432 fused_ordering(173) 00:14:13.432 fused_ordering(174) 00:14:13.432 fused_ordering(175) 00:14:13.432 fused_ordering(176) 00:14:13.432 fused_ordering(177) 00:14:13.432 fused_ordering(178) 00:14:13.432 fused_ordering(179) 00:14:13.432 fused_ordering(180) 00:14:13.432 fused_ordering(181) 00:14:13.432 fused_ordering(182) 00:14:13.432 fused_ordering(183) 00:14:13.432 fused_ordering(184) 00:14:13.432 fused_ordering(185) 00:14:13.432 fused_ordering(186) 00:14:13.432 fused_ordering(187) 00:14:13.432 fused_ordering(188) 00:14:13.432 fused_ordering(189) 00:14:13.432 fused_ordering(190) 00:14:13.432 fused_ordering(191) 00:14:13.432 fused_ordering(192) 00:14:13.432 fused_ordering(193) 00:14:13.432 fused_ordering(194) 00:14:13.432 fused_ordering(195) 00:14:13.432 fused_ordering(196) 00:14:13.432 fused_ordering(197) 00:14:13.432 fused_ordering(198) 00:14:13.432 fused_ordering(199) 00:14:13.432 fused_ordering(200) 00:14:13.432 fused_ordering(201) 00:14:13.432 fused_ordering(202) 00:14:13.432 fused_ordering(203) 00:14:13.432 
fused_ordering(204) 00:14:13.432 fused_ordering(205) 00:14:13.690 fused_ordering(206) 00:14:13.691 fused_ordering(207) 00:14:13.691 fused_ordering(208) 00:14:13.691 fused_ordering(209) 00:14:13.691 fused_ordering(210) 00:14:13.691 fused_ordering(211) 00:14:13.691 fused_ordering(212) 00:14:13.691 fused_ordering(213) 00:14:13.691 fused_ordering(214) 00:14:13.691 fused_ordering(215) 00:14:13.691 fused_ordering(216) 00:14:13.691 fused_ordering(217) 00:14:13.691 fused_ordering(218) 00:14:13.691 fused_ordering(219) 00:14:13.691 fused_ordering(220) 00:14:13.691 fused_ordering(221) 00:14:13.691 fused_ordering(222) 00:14:13.691 fused_ordering(223) 00:14:13.691 fused_ordering(224) 00:14:13.691 fused_ordering(225) 00:14:13.691 fused_ordering(226) 00:14:13.691 fused_ordering(227) 00:14:13.691 fused_ordering(228) 00:14:13.691 fused_ordering(229) 00:14:13.691 fused_ordering(230) 00:14:13.691 fused_ordering(231) 00:14:13.691 fused_ordering(232) 00:14:13.691 fused_ordering(233) 00:14:13.691 fused_ordering(234) 00:14:13.691 fused_ordering(235) 00:14:13.691 fused_ordering(236) 00:14:13.691 fused_ordering(237) 00:14:13.691 fused_ordering(238) 00:14:13.691 fused_ordering(239) 00:14:13.691 fused_ordering(240) 00:14:13.691 fused_ordering(241) 00:14:13.691 fused_ordering(242) 00:14:13.691 fused_ordering(243) 00:14:13.691 fused_ordering(244) 00:14:13.691 fused_ordering(245) 00:14:13.691 fused_ordering(246) 00:14:13.691 fused_ordering(247) 00:14:13.691 fused_ordering(248) 00:14:13.691 fused_ordering(249) 00:14:13.691 fused_ordering(250) 00:14:13.691 fused_ordering(251) 00:14:13.691 fused_ordering(252) 00:14:13.691 fused_ordering(253) 00:14:13.691 fused_ordering(254) 00:14:13.691 fused_ordering(255) 00:14:13.691 fused_ordering(256) 00:14:13.691 fused_ordering(257) 00:14:13.691 fused_ordering(258) 00:14:13.691 fused_ordering(259) 00:14:13.691 fused_ordering(260) 00:14:13.691 fused_ordering(261) 00:14:13.691 fused_ordering(262) 00:14:13.691 fused_ordering(263) 00:14:13.691 fused_ordering(264) 00:14:13.691 fused_ordering(265) 00:14:13.691 fused_ordering(266) 00:14:13.691 fused_ordering(267) 00:14:13.691 fused_ordering(268) 00:14:13.691 fused_ordering(269) 00:14:13.691 fused_ordering(270) 00:14:13.691 fused_ordering(271) 00:14:13.691 fused_ordering(272) 00:14:13.691 fused_ordering(273) 00:14:13.691 fused_ordering(274) 00:14:13.691 fused_ordering(275) 00:14:13.691 fused_ordering(276) 00:14:13.691 fused_ordering(277) 00:14:13.691 fused_ordering(278) 00:14:13.691 fused_ordering(279) 00:14:13.691 fused_ordering(280) 00:14:13.691 fused_ordering(281) 00:14:13.691 fused_ordering(282) 00:14:13.691 fused_ordering(283) 00:14:13.691 fused_ordering(284) 00:14:13.691 fused_ordering(285) 00:14:13.691 fused_ordering(286) 00:14:13.691 fused_ordering(287) 00:14:13.691 fused_ordering(288) 00:14:13.691 fused_ordering(289) 00:14:13.691 fused_ordering(290) 00:14:13.691 fused_ordering(291) 00:14:13.691 fused_ordering(292) 00:14:13.691 fused_ordering(293) 00:14:13.691 fused_ordering(294) 00:14:13.691 fused_ordering(295) 00:14:13.691 fused_ordering(296) 00:14:13.691 fused_ordering(297) 00:14:13.691 fused_ordering(298) 00:14:13.691 fused_ordering(299) 00:14:13.691 fused_ordering(300) 00:14:13.691 fused_ordering(301) 00:14:13.691 fused_ordering(302) 00:14:13.691 fused_ordering(303) 00:14:13.691 fused_ordering(304) 00:14:13.691 fused_ordering(305) 00:14:13.691 fused_ordering(306) 00:14:13.691 fused_ordering(307) 00:14:13.691 fused_ordering(308) 00:14:13.691 fused_ordering(309) 00:14:13.691 fused_ordering(310) 00:14:13.691 fused_ordering(311) 
00:14:13.691 fused_ordering(312) [fused_ordering(313) through fused_ordering(956) continue in unbroken sequence, one entry at a time, while the elapsed timestamp advances from 00:14:13.691 to 00:14:15.083; no other output appears in this stretch of the run]
00:14:15.083 fused_ordering(957) 00:14:15.083 fused_ordering(958) 00:14:15.083 fused_ordering(959) 00:14:15.083 fused_ordering(960) 00:14:15.083 fused_ordering(961) 00:14:15.083 fused_ordering(962) 00:14:15.083 fused_ordering(963) 00:14:15.083 fused_ordering(964) 00:14:15.083 fused_ordering(965) 00:14:15.083 fused_ordering(966) 00:14:15.083 fused_ordering(967) 00:14:15.083 fused_ordering(968) 00:14:15.083 fused_ordering(969) 00:14:15.083 fused_ordering(970) 00:14:15.083 fused_ordering(971) 00:14:15.083 fused_ordering(972) 00:14:15.083 fused_ordering(973) 00:14:15.083 fused_ordering(974) 00:14:15.083 fused_ordering(975) 00:14:15.083 fused_ordering(976) 00:14:15.083 fused_ordering(977) 00:14:15.083 fused_ordering(978) 00:14:15.083 fused_ordering(979) 00:14:15.083 fused_ordering(980) 00:14:15.083 fused_ordering(981) 00:14:15.083 fused_ordering(982) 00:14:15.083 fused_ordering(983) 00:14:15.083 fused_ordering(984) 00:14:15.083 fused_ordering(985) 00:14:15.083 fused_ordering(986) 00:14:15.083 fused_ordering(987) 00:14:15.083 fused_ordering(988) 00:14:15.083 fused_ordering(989) 00:14:15.083 fused_ordering(990) 00:14:15.083 fused_ordering(991) 00:14:15.083 fused_ordering(992) 00:14:15.083 fused_ordering(993) 00:14:15.083 fused_ordering(994) 00:14:15.083 fused_ordering(995) 00:14:15.083 fused_ordering(996) 00:14:15.083 fused_ordering(997) 00:14:15.083 fused_ordering(998) 00:14:15.083 fused_ordering(999) 00:14:15.083 fused_ordering(1000) 00:14:15.083 fused_ordering(1001) 00:14:15.083 fused_ordering(1002) 00:14:15.083 fused_ordering(1003) 00:14:15.083 fused_ordering(1004) 00:14:15.083 fused_ordering(1005) 00:14:15.083 fused_ordering(1006) 00:14:15.083 fused_ordering(1007) 00:14:15.083 fused_ordering(1008) 00:14:15.083 fused_ordering(1009) 00:14:15.083 fused_ordering(1010) 00:14:15.083 fused_ordering(1011) 00:14:15.083 fused_ordering(1012) 00:14:15.083 fused_ordering(1013) 00:14:15.083 fused_ordering(1014) 00:14:15.083 fused_ordering(1015) 00:14:15.083 fused_ordering(1016) 00:14:15.083 fused_ordering(1017) 00:14:15.083 fused_ordering(1018) 00:14:15.083 fused_ordering(1019) 00:14:15.083 fused_ordering(1020) 00:14:15.083 fused_ordering(1021) 00:14:15.083 fused_ordering(1022) 00:14:15.083 fused_ordering(1023) 00:14:15.083 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:15.083 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:15.083 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.083 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:15.083 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.083 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:15.083 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.083 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.083 rmmod nvme_tcp 00:14:15.083 rmmod nvme_fabrics 00:14:15.342 rmmod nvme_keyring 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2650953 ']' 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2650953 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2650953 ']' 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2650953 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2650953 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2650953' 00:14:15.342 killing process with pid 2650953 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2650953 00:14:15.342 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2650953 00:14:15.601 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.601 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.601 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.601 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.601 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.601 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.601 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.601 22:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.502 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.502 00:14:17.502 real 0m12.766s 00:14:17.502 user 0m6.324s 00:14:17.502 sys 0m7.313s 00:14:17.502 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.502 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.502 ************************************ 00:14:17.502 END TEST nvmf_fused_ordering 00:14:17.502 ************************************ 00:14:17.502 22:01:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:17.502 22:01:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.502 22:01:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.502 22:01:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.761 ************************************ 00:14:17.761 START TEST nvmf_ns_masking 00:14:17.761 ************************************ 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:17.761 * Looking for test storage... 00:14:17.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.761 22:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0306d0e9-9682-48dd-a05c-81dc5b1e9424 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=560285a1-930f-4bab-85d2-f059768a26c6 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=47c9d161-0045-4fa3-8266-acf7c4d47c6b 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.761 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.762 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.762 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:17.762 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:17.762 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.762 22:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:24.326 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:24.326 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:24.326 Found net devices under 0000:af:00.0: cvl_0_0 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:24.326 Found net devices under 0000:af:00.1: cvl_0_1 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.326 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.593 22:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:14:24.593 00:14:24.593 --- 10.0.0.2 ping statistics --- 00:14:24.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.593 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:14:24.593 00:14:24.593 --- 10.0.0.1 ping statistics --- 00:14:24.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.593 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2655187 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2655187 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2655187 ']' 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.593 22:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:24.851 [2024-07-24 22:02:03.850694] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:14:24.851 [2024-07-24 22:02:03.850747] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.851 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.851 [2024-07-24 22:02:03.922564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.851 [2024-07-24 22:02:03.988920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.851 [2024-07-24 22:02:03.988961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.851 [2024-07-24 22:02:03.988970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.851 [2024-07-24 22:02:03.988979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.851 [2024-07-24 22:02:03.988986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.851 [2024-07-24 22:02:03.989012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.785 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.785 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:25.785 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.785 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:25.785 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:25.785 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.785 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:25.785 [2024-07-24 22:02:04.839879] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.785 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:25.785 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:25.786 22:02:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:26.044 Malloc1 00:14:26.044 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:26.044 Malloc2 00:14:26.044 22:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:26.302 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:26.560 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.560 [2024-07-24 22:02:05.715393] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.560 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:26.560 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47c9d161-0045-4fa3-8266-acf7c4d47c6b -a 10.0.0.2 -s 4420 -i 4 00:14:26.819 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:26.819 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:26.819 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.819 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:26.819 22:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.717 [ 0]:0x1 00:14:28.717 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
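The ns_is_visible checks traced above (ns_masking.sh@43-45) follow one pattern: list the controller's active namespaces, grep for the expected NSID, then read that namespace's NGUID from its identify data and treat an all-zero NGUID as "not visible". Below is a minimal shell sketch of that check, reconstructed from the trace; the helper name, argument handling, and the hard-coded /dev/nvme0 controller are illustrative assumptions, not the literal test script.

  # Return success if namespace <nsid> is exposed to this host on /dev/nvme0.
  # Assumes nvme-cli and jq are available, as in the trace above.
  ns_is_visible() {
      local nsid=$1
      # A visible namespace shows up in the controller's active namespace list...
      nvme list-ns /dev/nvme0 | grep "$nsid"
      # ...and reports a non-zero NGUID in its identify-namespace data.
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ "$nguid" != "00000000000000000000000000000000" ]]
  }

  ns_is_visible 0x1   # expected to succeed while Malloc1 is auto-visible
  ns_is_visible 0x2   # checked again after Malloc2 is added as namespace 2

In the steps that follow, the same check is expected to fail once namespace 1 is re-added with --no-auto-visible, and to succeed again after nvmf_ns_add_host grants nqn.2016-06.io.spdk:host1 access to it.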
00:14:28.975 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.975 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97ada6b0c3a241e9833ae088a2c4ed4b 00:14:28.975 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97ada6b0c3a241e9833ae088a2c4ed4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.975 22:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.975 [ 0]:0x1 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97ada6b0c3a241e9833ae088a2c4ed4b 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97ada6b0c3a241e9833ae088a2c4ed4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.975 [ 1]:0x2 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.975 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.233 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3d3751e2d1a4888b6279c59430f00a5 00:14:29.233 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3d3751e2d1a4888b6279c59430f00a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.233 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:29.233 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.233 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.491 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:29.491 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:29.491 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47c9d161-0045-4fa3-8266-acf7c4d47c6b -a 10.0.0.2 -s 4420 -i 4 00:14:29.750 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:29.750 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:29.750 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.750 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:29.750 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:29.750 22:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:14:31.649 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:31.907 [ 0]:0x2 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3d3751e2d1a4888b6279c59430f00a5 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3d3751e2d1a4888b6279c59430f00a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.907 22:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:32.164 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.165 [ 0]:0x1 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97ada6b0c3a241e9833ae088a2c4ed4b 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97ada6b0c3a241e9833ae088a2c4ed4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.165 [ 1]:0x2 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3d3751e2d1a4888b6279c59430f00a5 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3d3751e2d1a4888b6279c59430f00a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.165 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.422 [ 0]:0x2 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3d3751e2d1a4888b6279c59430f00a5 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3d3751e2d1a4888b6279c59430f00a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.422 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:32.680 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:32.680 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47c9d161-0045-4fa3-8266-acf7c4d47c6b -a 10.0.0.2 -s 4420 -i 4 00:14:32.938 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:32.938 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:32.938 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.938 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:32.938 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:32.938 22:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:34.839 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:34.839 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.839 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:34.839 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:34.839 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.839 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:34.839 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:34.839 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.097 [ 0]:0x1 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97ada6b0c3a241e9833ae088a2c4ed4b 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97ada6b0c3a241e9833ae088a2c4ed4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.097 [ 1]:0x2 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3d3751e2d1a4888b6279c59430f00a5 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3d3751e2d1a4888b6279c59430f00a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.097 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.355 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:35.356 22:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.356 [ 0]:0x2 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3d3751e2d1a4888b6279c59430f00a5 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3d3751e2d1a4888b6279c59430f00a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:35.356 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:35.614 [2024-07-24 22:02:14.673141] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:35.614 request: 00:14:35.614 { 00:14:35.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.614 "nsid": 2, 00:14:35.614 "host": "nqn.2016-06.io.spdk:host1", 00:14:35.614 "method": "nvmf_ns_remove_host", 00:14:35.614 "req_id": 1 00:14:35.614 } 00:14:35.614 Got JSON-RPC error response 00:14:35.614 response: 00:14:35.614 { 00:14:35.614 "code": -32602, 00:14:35.614 "message": "Invalid parameters" 00:14:35.614 } 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.614 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.615 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.615 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:35.615 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.615 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.615 [ 0]:0x2 00:14:35.615 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.615 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3d3751e2d1a4888b6279c59430f00a5 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3d3751e2d1a4888b6279c59430f00a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2657198 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2657198 /var/tmp/host.sock 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2657198 ']' 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:35.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.873 22:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.873 [2024-07-24 22:02:15.039829] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:14:35.873 [2024-07-24 22:02:15.039880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657198 ] 00:14:35.873 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.131 [2024-07-24 22:02:15.109284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.131 [2024-07-24 22:02:15.179704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.697 22:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.698 22:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:36.698 22:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.956 22:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:36.956 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0306d0e9-9682-48dd-a05c-81dc5b1e9424 00:14:36.956 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:36.956 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0306D0E9968248DDA05C81DC5B1E9424 -i 00:14:37.214 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 560285a1-930f-4bab-85d2-f059768a26c6 00:14:37.214 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:37.214 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 560285A1930F4BAB85D2F059768A26C6 -i 00:14:37.487 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:37.487 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:37.762 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:37.762 22:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:38.020 nvme0n1 00:14:38.020 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:38.020 22:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:38.278 nvme1n2 00:14:38.278 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:38.278 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:38.278 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:38.278 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:38.278 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:38.536 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:38.536 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:38.536 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:38.536 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:38.536 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0306d0e9-9682-48dd-a05c-81dc5b1e9424 == \0\3\0\6\d\0\e\9\-\9\6\8\2\-\4\8\d\d\-\a\0\5\c\-\8\1\d\c\5\b\1\e\9\4\2\4 ]] 00:14:38.536 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:38.536 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:38.536 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 560285a1-930f-4bab-85d2-f059768a26c6 == \5\6\0\2\8\5\a\1\-\9\3\0\f\-\4\b\a\b\-\8\5\d\2\-\f\0\5\9\7\6\8\a\2\6\c\6 ]] 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2657198 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2657198 ']' 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2657198 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2657198 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2657198' 00:14:38.794 killing process with pid 2657198 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2657198 00:14:38.794 22:02:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2657198 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.361 rmmod nvme_tcp 00:14:39.361 rmmod nvme_fabrics 00:14:39.361 rmmod nvme_keyring 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2655187 ']' 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2655187 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2655187 ']' 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2655187 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2655187 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2655187' 00:14:39.361 killing process with pid 2655187 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2655187 00:14:39.361 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2655187 00:14:39.619 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.619 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.619 
22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.619 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.619 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.619 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.619 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.619 22:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.153 22:02:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:42.153 00:14:42.153 real 0m24.144s 00:14:42.153 user 0m23.796s 00:14:42.153 sys 0m8.187s 00:14:42.153 22:02:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.153 22:02:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:42.153 ************************************ 00:14:42.153 END TEST nvmf_ns_masking 00:14:42.153 ************************************ 00:14:42.153 22:02:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:42.153 22:02:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:42.153 22:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:42.153 22:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.153 22:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.153 ************************************ 00:14:42.153 START TEST nvmf_nvme_cli 00:14:42.153 ************************************ 00:14:42.153 22:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:42.153 * Looking for test storage... 
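For reference, the nvmf_ns_masking run that ends above reduces to the following RPC / nvme-cli sequence. This is a minimal sketch only: the rpc.py path is shortened, the -I/-i connect options from the log are dropped, and the Malloc1/Malloc2 bdevs plus the 10.0.0.2:4420 TCP listener are assumed to have been created earlier in the log.

    # target side: add the namespace without auto-visibility, then grant/revoke access per host NQN
    rpc.py nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # initiator side: connect as host1 and check what that host can see
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros while the namespace is masked
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1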
00:14:42.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.153 22:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.153 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:42.154 22:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.719 22:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:48.719 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:48.719 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:48.719 22:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:48.719 Found net devices under 0000:af:00.0: cvl_0_0 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:48.719 Found net devices under 0000:af:00.1: cvl_0_1 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.719 22:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.719 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:48.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:14:48.720 00:14:48.720 --- 10.0.0.2 ping statistics --- 00:14:48.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.720 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:14:48.720 00:14:48.720 --- 10.0.0.1 ping statistics --- 00:14:48.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.720 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:48.720 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2661437 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2661437 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2661437 ']' 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.979 22:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.979 [2024-07-24 22:02:28.015948] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
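The trace above builds the point-to-point TCP test topology that the target now starting depends on: both E810 ports are flushed, the first port (cvl_0_0) is moved into a private network namespace that will host the target, the second port (cvl_0_1) stays in the root namespace as the initiator, both sides get addresses on 10.0.0.0/24, TCP port 4420 is opened, and reachability is checked in both directions. A condensed sketch of the same steps, using the interface names and addresses from this run:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the listener it later creates on 10.0.0.2:4420 is reachable from the initiator over cvl_0_1.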
00:14:48.979 [2024-07-24 22:02:28.015996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.979 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.979 [2024-07-24 22:02:28.090418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.979 [2024-07-24 22:02:28.166329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.979 [2024-07-24 22:02:28.166366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.979 [2024-07-24 22:02:28.166376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.979 [2024-07-24 22:02:28.166385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.979 [2024-07-24 22:02:28.166391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.979 [2024-07-24 22:02:28.166431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.979 [2024-07-24 22:02:28.166525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.979 [2024-07-24 22:02:28.166611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.979 [2024-07-24 22:02:28.166613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.914 [2024-07-24 22:02:28.878149] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.914 Malloc0 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:49.914 22:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.914 Malloc1 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.914 [2024-07-24 22:02:28.962518] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.914 22:02:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:14:50.172 00:14:50.172 Discovery Log Number of Records 2, Generation counter 2 00:14:50.172 =====Discovery Log Entry 0====== 00:14:50.172 trtype: tcp 00:14:50.172 adrfam: ipv4 00:14:50.172 subtype: current discovery subsystem 00:14:50.172 treq: not required 
00:14:50.172 portid: 0 00:14:50.172 trsvcid: 4420 00:14:50.172 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:50.172 traddr: 10.0.0.2 00:14:50.172 eflags: explicit discovery connections, duplicate discovery information 00:14:50.172 sectype: none 00:14:50.172 =====Discovery Log Entry 1====== 00:14:50.172 trtype: tcp 00:14:50.172 adrfam: ipv4 00:14:50.172 subtype: nvme subsystem 00:14:50.172 treq: not required 00:14:50.172 portid: 0 00:14:50.172 trsvcid: 4420 00:14:50.173 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:50.173 traddr: 10.0.0.2 00:14:50.173 eflags: none 00:14:50.173 sectype: none 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:50.173 22:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.545 22:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:51.545 22:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:51.545 22:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.545 22:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:51.545 22:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:51.545 22:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:53.444 /dev/nvme0n1 ]] 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:53.444 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.703 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:53.703 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.703 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:53.703 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.704 rmmod nvme_tcp 00:14:53.704 rmmod nvme_fabrics 00:14:53.704 rmmod nvme_keyring 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2661437 ']' 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2661437 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2661437 ']' 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2661437 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2661437 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2661437' 00:14:53.704 killing process with pid 2661437 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2661437 00:14:53.704 22:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2661437 00:14:53.962 22:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.962 22:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.962 22:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.962 22:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.962 22:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.962 22:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.962 22:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.962 22:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.516 00:14:56.516 real 0m14.215s 00:14:56.516 user 0m21.299s 00:14:56.516 sys 0m6.088s 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.516 ************************************ 00:14:56.516 END TEST nvmf_nvme_cli 00:14:56.516 ************************************ 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.516 ************************************ 00:14:56.516 START TEST nvmf_vfio_user 00:14:56.516 ************************************ 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:56.516 * Looking for test storage... 
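For reference, the nvmf_nvme_cli test that just finished drives the whole path with stock nvme-cli from the initiator side: discover the target over TCP, connect to nqn.2016-06.io.spdk:cnode1, confirm that the two malloc namespaces show up as block devices carrying the SPDKISFASTANDAWESOME serial, and disconnect again. Condensed (host identity pulled into shell variables for brevity), with the values used in this run:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  HOSTID=006f0d1b-21c0-e711-906e-00163566263e
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
  nvme connect  --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 (Malloc0 and Malloc1 namespaces)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The target side had been prepared with the matching RPCs visible earlier in the trace: nvmf_create_transport -t tcp -o -u 8192, two bdev_malloc_create calls, nvmf_create_subsystem plus nvmf_subsystem_add_ns for cnode1, and nvmf_subsystem_add_listener for both cnode1 and the discovery subsystem on 10.0.0.2:4420.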
00:14:56.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.516 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:56.517 22:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2662887 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2662887' 00:14:56.517 Process pid: 2662887 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2662887 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2662887 ']' 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.517 22:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:56.517 [2024-07-24 22:02:35.453888] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:14:56.517 [2024-07-24 22:02:35.453935] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.517 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.517 [2024-07-24 22:02:35.521297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.517 [2024-07-24 22:02:35.590260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.517 [2024-07-24 22:02:35.590303] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:56.517 [2024-07-24 22:02:35.590312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.517 [2024-07-24 22:02:35.590320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.517 [2024-07-24 22:02:35.590327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.517 [2024-07-24 22:02:35.590424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.517 [2024-07-24 22:02:35.590520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.517 [2024-07-24 22:02:35.590604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.517 [2024-07-24 22:02:35.590606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.081 22:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.081 22:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:57.081 22:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.455 22:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:58.455 22:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:58.455 22:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:58.455 22:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.455 22:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:58.455 22:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:58.455 Malloc1 00:14:58.455 22:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:58.713 22:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:58.969 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:59.228 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.228 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:59.228 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.228 Malloc2 00:14:59.228 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
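Unlike the TCP case, the VFIO-user transport listens on a filesystem path rather than an IP and port: for each emulated controller the test creates a directory under /var/run/vfio-user, backs the subsystem with a malloc bdev, and points the listener at that directory. The RPC sequence for the first device, as traced above (rpc.py path abbreviated; the same steps repeat for Malloc2/cnode2 around this point in the trace):

  # once:
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  # per device (device 1 shown; device 2 uses vfio-user2/2, Malloc2, cnode2, SPDK2):
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                   # 64 MB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The directory passed with -a is where the target exposes the vfio-user control socket (cntrl) that a client such as spdk_nvme_identify attaches to later in the log.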
00:14:59.486 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:59.744 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:59.744 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:59.744 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:59.744 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.744 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:59.744 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:59.744 22:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:00.004 [2024-07-24 22:02:38.961513] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:15:00.004 [2024-07-24 22:02:38.961552] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663444 ] 00:15:00.004 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.004 [2024-07-24 22:02:38.991064] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:00.004 [2024-07-24 22:02:39.002222] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.004 [2024-07-24 22:02:39.002241] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe1fbed3000 00:15:00.004 [2024-07-24 22:02:39.003220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.004 [2024-07-24 22:02:39.004219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.004 [2024-07-24 22:02:39.005227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.004 [2024-07-24 22:02:39.006234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.004 [2024-07-24 22:02:39.007241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.004 [2024-07-24 22:02:39.008247] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.004 [2024-07-24 22:02:39.009255] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.004 [2024-07-24 22:02:39.010263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.004 [2024-07-24 22:02:39.011267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.004 [2024-07-24 22:02:39.011278] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe1fbec8000 00:15:00.004 [2024-07-24 22:02:39.012170] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.004 [2024-07-24 22:02:39.023797] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:00.004 [2024-07-24 22:02:39.023824] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:00.004 [2024-07-24 22:02:39.029378] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.004 [2024-07-24 22:02:39.029417] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:00.004 [2024-07-24 22:02:39.029488] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:00.004 [2024-07-24 22:02:39.029506] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:00.004 [2024-07-24 22:02:39.029512] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:00.004 [2024-07-24 22:02:39.030392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:00.004 [2024-07-24 22:02:39.030405] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:00.004 [2024-07-24 22:02:39.030414] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:00.004 [2024-07-24 22:02:39.031381] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.004 [2024-07-24 22:02:39.031394] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:00.004 [2024-07-24 22:02:39.031403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:00.004 [2024-07-24 22:02:39.032385] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:00.004 [2024-07-24 22:02:39.032395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:00.004 [2024-07-24 22:02:39.033393] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:00.005 [2024-07-24 22:02:39.033402] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:00.005 [2024-07-24 22:02:39.033408] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:00.005 [2024-07-24 22:02:39.033416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:00.005 [2024-07-24 22:02:39.033523] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:00.005 [2024-07-24 22:02:39.033529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:00.005 [2024-07-24 22:02:39.033536] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:00.005 [2024-07-24 22:02:39.034394] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:00.005 [2024-07-24 22:02:39.035399] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:00.005 [2024-07-24 22:02:39.036408] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:00.005 [2024-07-24 22:02:39.037403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.005 [2024-07-24 22:02:39.037485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:00.005 [2024-07-24 22:02:39.038416] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:00.005 [2024-07-24 22:02:39.038426] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:00.005 [2024-07-24 22:02:39.038432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038451] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:00.005 [2024-07-24 22:02:39.038463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038479] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.005 [2024-07-24 22:02:39.038485] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.005 [2024-07-24 22:02:39.038490] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.005 [2024-07-24 22:02:39.038504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.005 [2024-07-24 22:02:39.038549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:00.005 [2024-07-24 22:02:39.038559] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:00.005 [2024-07-24 22:02:39.038565] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:00.005 [2024-07-24 22:02:39.038571] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:00.005 [2024-07-24 22:02:39.038576] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:00.005 [2024-07-24 22:02:39.038582] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:00.005 [2024-07-24 22:02:39.038588] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:00.005 [2024-07-24 22:02:39.038594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:00.005 [2024-07-24 22:02:39.038629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:00.005 [2024-07-24 22:02:39.038643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.005 [2024-07-24 22:02:39.038653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.005 [2024-07-24 22:02:39.038662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.005 [2024-07-24 22:02:39.038670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.005 [2024-07-24 22:02:39.038676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:00.005 [2024-07-24 22:02:39.038705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:00.005 [2024-07-24 22:02:39.038712] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:00.005 
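The DEBUG lines from nvme_vfio_user.c and nvme_ctrlr.c above are the ordinary NVMe controller bring-up, only with every register access relayed over the vfio-user socket instead of PCIe MMIO. The offsets in this trace line up with the standard controller register map, which makes the sequence easier to follow:

  0x00  CAP   read  0x201e0100ff          controller capabilities
  0x08  VS    read  0x10300               NVMe version 1.3.0
  0x14  CC    read 0x0, write 0x460001    enable the controller (CC.EN=1)
  0x1c  CSTS  read 0x0 ... 0x1            wait for CSTS.RDY to follow CC.EN
  0x24  AQA   write 0xff00ff              admin queue sizes (256 entries each)
  0x28  ASQ   write 0x2000003c0000        admin submission queue base
  0x30  ACQ   write 0x2000003be000        admin completion queue base

Once CSTS.RDY reads back 1, the client issues the admin commands whose completions appear above: Identify Controller (CNS 01h), Identify Active Namespace IDs (CNS 02h), Identify Namespace and its ID descriptors (CNS 00h/03h), plus the Get/Set Features and Get Log Page calls, and the identify report that follows is printed from those results.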
[2024-07-24 22:02:39.038723] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038734] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.005 [2024-07-24 22:02:39.038762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:00.005 [2024-07-24 22:02:39.038813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038831] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:00.005 [2024-07-24 22:02:39.038837] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:00.005 [2024-07-24 22:02:39.038842] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.005 [2024-07-24 22:02:39.038849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:00.005 [2024-07-24 22:02:39.038866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:00.005 [2024-07-24 22:02:39.038876] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:00.005 [2024-07-24 22:02:39.038886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038903] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.005 [2024-07-24 22:02:39.038908] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.005 [2024-07-24 22:02:39.038913] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.005 [2024-07-24 22:02:39.038919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.005 [2024-07-24 22:02:39.038939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:00.005 [2024-07-24 22:02:39.038952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.038969] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.005 [2024-07-24 22:02:39.038974] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.005 [2024-07-24 22:02:39.038979] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.005 [2024-07-24 22:02:39.038986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.005 [2024-07-24 22:02:39.038996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:00.005 [2024-07-24 22:02:39.039005] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.039013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.039022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.039031] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.039039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.039045] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.039051] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:00.005 [2024-07-24 22:02:39.039057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:00.005 [2024-07-24 22:02:39.039063] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:00.005 [2024-07-24 22:02:39.039082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:00.005 [2024-07-24 22:02:39.039092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:00.005 [2024-07-24 22:02:39.039106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:00.006 [2024-07-24 22:02:39.039113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:00.006 [2024-07-24 22:02:39.039126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:00.006 [2024-07-24 
22:02:39.039137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:00.006 [2024-07-24 22:02:39.039151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.006 [2024-07-24 22:02:39.039162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:00.006 [2024-07-24 22:02:39.039177] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:00.006 [2024-07-24 22:02:39.039183] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:00.006 [2024-07-24 22:02:39.039187] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:00.006 [2024-07-24 22:02:39.039192] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:00.006 [2024-07-24 22:02:39.039196] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:00.006 [2024-07-24 22:02:39.039203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:00.006 [2024-07-24 22:02:39.039211] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:00.006 [2024-07-24 22:02:39.039217] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:00.006 [2024-07-24 22:02:39.039221] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.006 [2024-07-24 22:02:39.039228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:00.006 [2024-07-24 22:02:39.039235] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:00.006 [2024-07-24 22:02:39.039241] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.006 [2024-07-24 22:02:39.039246] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.006 [2024-07-24 22:02:39.039252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.006 [2024-07-24 22:02:39.039262] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:00.006 [2024-07-24 22:02:39.039268] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:00.006 [2024-07-24 22:02:39.039272] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.006 [2024-07-24 22:02:39.039278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:00.006 [2024-07-24 22:02:39.039286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:00.006 [2024-07-24 22:02:39.039302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:00.006 [2024-07-24 
22:02:39.039315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:00.006 [2024-07-24 22:02:39.039323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:00.006 ===================================================== 00:15:00.006 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.006 ===================================================== 00:15:00.006 Controller Capabilities/Features 00:15:00.006 ================================ 00:15:00.006 Vendor ID: 4e58 00:15:00.006 Subsystem Vendor ID: 4e58 00:15:00.006 Serial Number: SPDK1 00:15:00.006 Model Number: SPDK bdev Controller 00:15:00.006 Firmware Version: 24.09 00:15:00.006 Recommended Arb Burst: 6 00:15:00.006 IEEE OUI Identifier: 8d 6b 50 00:15:00.006 Multi-path I/O 00:15:00.006 May have multiple subsystem ports: Yes 00:15:00.006 May have multiple controllers: Yes 00:15:00.006 Associated with SR-IOV VF: No 00:15:00.006 Max Data Transfer Size: 131072 00:15:00.006 Max Number of Namespaces: 32 00:15:00.006 Max Number of I/O Queues: 127 00:15:00.006 NVMe Specification Version (VS): 1.3 00:15:00.006 NVMe Specification Version (Identify): 1.3 00:15:00.006 Maximum Queue Entries: 256 00:15:00.006 Contiguous Queues Required: Yes 00:15:00.006 Arbitration Mechanisms Supported 00:15:00.006 Weighted Round Robin: Not Supported 00:15:00.006 Vendor Specific: Not Supported 00:15:00.006 Reset Timeout: 15000 ms 00:15:00.006 Doorbell Stride: 4 bytes 00:15:00.006 NVM Subsystem Reset: Not Supported 00:15:00.006 Command Sets Supported 00:15:00.006 NVM Command Set: Supported 00:15:00.006 Boot Partition: Not Supported 00:15:00.006 Memory Page Size Minimum: 4096 bytes 00:15:00.006 Memory Page Size Maximum: 4096 bytes 00:15:00.006 Persistent Memory Region: Not Supported 00:15:00.006 Optional Asynchronous Events Supported 00:15:00.006 Namespace Attribute Notices: Supported 00:15:00.006 Firmware Activation Notices: Not Supported 00:15:00.006 ANA Change Notices: Not Supported 00:15:00.006 PLE Aggregate Log Change Notices: Not Supported 00:15:00.006 LBA Status Info Alert Notices: Not Supported 00:15:00.006 EGE Aggregate Log Change Notices: Not Supported 00:15:00.006 Normal NVM Subsystem Shutdown event: Not Supported 00:15:00.006 Zone Descriptor Change Notices: Not Supported 00:15:00.006 Discovery Log Change Notices: Not Supported 00:15:00.006 Controller Attributes 00:15:00.006 128-bit Host Identifier: Supported 00:15:00.006 Non-Operational Permissive Mode: Not Supported 00:15:00.006 NVM Sets: Not Supported 00:15:00.006 Read Recovery Levels: Not Supported 00:15:00.006 Endurance Groups: Not Supported 00:15:00.006 Predictable Latency Mode: Not Supported 00:15:00.006 Traffic Based Keep ALive: Not Supported 00:15:00.006 Namespace Granularity: Not Supported 00:15:00.006 SQ Associations: Not Supported 00:15:00.006 UUID List: Not Supported 00:15:00.006 Multi-Domain Subsystem: Not Supported 00:15:00.006 Fixed Capacity Management: Not Supported 00:15:00.006 Variable Capacity Management: Not Supported 00:15:00.006 Delete Endurance Group: Not Supported 00:15:00.006 Delete NVM Set: Not Supported 00:15:00.006 Extended LBA Formats Supported: Not Supported 00:15:00.006 Flexible Data Placement Supported: Not Supported 00:15:00.006 00:15:00.006 Controller Memory Buffer Support 00:15:00.006 ================================ 00:15:00.006 Supported: No 00:15:00.006 00:15:00.006 Persistent 
Memory Region Support 00:15:00.006 ================================ 00:15:00.006 Supported: No 00:15:00.006 00:15:00.006 Admin Command Set Attributes 00:15:00.006 ============================ 00:15:00.006 Security Send/Receive: Not Supported 00:15:00.006 Format NVM: Not Supported 00:15:00.006 Firmware Activate/Download: Not Supported 00:15:00.006 Namespace Management: Not Supported 00:15:00.006 Device Self-Test: Not Supported 00:15:00.006 Directives: Not Supported 00:15:00.006 NVMe-MI: Not Supported 00:15:00.006 Virtualization Management: Not Supported 00:15:00.006 Doorbell Buffer Config: Not Supported 00:15:00.006 Get LBA Status Capability: Not Supported 00:15:00.006 Command & Feature Lockdown Capability: Not Supported 00:15:00.006 Abort Command Limit: 4 00:15:00.006 Async Event Request Limit: 4 00:15:00.006 Number of Firmware Slots: N/A 00:15:00.006 Firmware Slot 1 Read-Only: N/A 00:15:00.006 Firmware Activation Without Reset: N/A 00:15:00.006 Multiple Update Detection Support: N/A 00:15:00.006 Firmware Update Granularity: No Information Provided 00:15:00.006 Per-Namespace SMART Log: No 00:15:00.006 Asymmetric Namespace Access Log Page: Not Supported 00:15:00.006 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:00.006 Command Effects Log Page: Supported 00:15:00.006 Get Log Page Extended Data: Supported 00:15:00.006 Telemetry Log Pages: Not Supported 00:15:00.006 Persistent Event Log Pages: Not Supported 00:15:00.006 Supported Log Pages Log Page: May Support 00:15:00.006 Commands Supported & Effects Log Page: Not Supported 00:15:00.006 Feature Identifiers & Effects Log Page:May Support 00:15:00.006 NVMe-MI Commands & Effects Log Page: May Support 00:15:00.006 Data Area 4 for Telemetry Log: Not Supported 00:15:00.006 Error Log Page Entries Supported: 128 00:15:00.006 Keep Alive: Supported 00:15:00.006 Keep Alive Granularity: 10000 ms 00:15:00.006 00:15:00.006 NVM Command Set Attributes 00:15:00.006 ========================== 00:15:00.006 Submission Queue Entry Size 00:15:00.006 Max: 64 00:15:00.006 Min: 64 00:15:00.006 Completion Queue Entry Size 00:15:00.006 Max: 16 00:15:00.006 Min: 16 00:15:00.006 Number of Namespaces: 32 00:15:00.006 Compare Command: Supported 00:15:00.006 Write Uncorrectable Command: Not Supported 00:15:00.006 Dataset Management Command: Supported 00:15:00.006 Write Zeroes Command: Supported 00:15:00.006 Set Features Save Field: Not Supported 00:15:00.006 Reservations: Not Supported 00:15:00.006 Timestamp: Not Supported 00:15:00.006 Copy: Supported 00:15:00.006 Volatile Write Cache: Present 00:15:00.006 Atomic Write Unit (Normal): 1 00:15:00.006 Atomic Write Unit (PFail): 1 00:15:00.006 Atomic Compare & Write Unit: 1 00:15:00.007 Fused Compare & Write: Supported 00:15:00.007 Scatter-Gather List 00:15:00.007 SGL Command Set: Supported (Dword aligned) 00:15:00.007 SGL Keyed: Not Supported 00:15:00.007 SGL Bit Bucket Descriptor: Not Supported 00:15:00.007 SGL Metadata Pointer: Not Supported 00:15:00.007 Oversized SGL: Not Supported 00:15:00.007 SGL Metadata Address: Not Supported 00:15:00.007 SGL Offset: Not Supported 00:15:00.007 Transport SGL Data Block: Not Supported 00:15:00.007 Replay Protected Memory Block: Not Supported 00:15:00.007 00:15:00.007 Firmware Slot Information 00:15:00.007 ========================= 00:15:00.007 Active slot: 1 00:15:00.007 Slot 1 Firmware Revision: 24.09 00:15:00.007 00:15:00.007 00:15:00.007 Commands Supported and Effects 00:15:00.007 ============================== 00:15:00.007 Admin Commands 00:15:00.007 -------------- 00:15:00.007 Get 
Log Page (02h): Supported 00:15:00.007 Identify (06h): Supported 00:15:00.007 Abort (08h): Supported 00:15:00.007 Set Features (09h): Supported 00:15:00.007 Get Features (0Ah): Supported 00:15:00.007 Asynchronous Event Request (0Ch): Supported 00:15:00.007 Keep Alive (18h): Supported 00:15:00.007 I/O Commands 00:15:00.007 ------------ 00:15:00.007 Flush (00h): Supported LBA-Change 00:15:00.007 Write (01h): Supported LBA-Change 00:15:00.007 Read (02h): Supported 00:15:00.007 Compare (05h): Supported 00:15:00.007 Write Zeroes (08h): Supported LBA-Change 00:15:00.007 Dataset Management (09h): Supported LBA-Change 00:15:00.007 Copy (19h): Supported LBA-Change 00:15:00.007 00:15:00.007 Error Log 00:15:00.007 ========= 00:15:00.007 00:15:00.007 Arbitration 00:15:00.007 =========== 00:15:00.007 Arbitration Burst: 1 00:15:00.007 00:15:00.007 Power Management 00:15:00.007 ================ 00:15:00.007 Number of Power States: 1 00:15:00.007 Current Power State: Power State #0 00:15:00.007 Power State #0: 00:15:00.007 Max Power: 0.00 W 00:15:00.007 Non-Operational State: Operational 00:15:00.007 Entry Latency: Not Reported 00:15:00.007 Exit Latency: Not Reported 00:15:00.007 Relative Read Throughput: 0 00:15:00.007 Relative Read Latency: 0 00:15:00.007 Relative Write Throughput: 0 00:15:00.007 Relative Write Latency: 0 00:15:00.007 Idle Power: Not Reported 00:15:00.007 Active Power: Not Reported 00:15:00.007 Non-Operational Permissive Mode: Not Supported 00:15:00.007 00:15:00.007 Health Information 00:15:00.007 ================== 00:15:00.007 Critical Warnings: 00:15:00.007 Available Spare Space: OK 00:15:00.007 Temperature: OK 00:15:00.007 Device Reliability: OK 00:15:00.007 Read Only: No 00:15:00.007 Volatile Memory Backup: OK 00:15:00.007 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:00.007 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:00.007 Available Spare: 0% 00:15:00.007 Available Sp[2024-07-24 22:02:39.039417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:00.007 [2024-07-24 22:02:39.039426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:00.007 [2024-07-24 22:02:39.039455] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:00.007 [2024-07-24 22:02:39.039465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.007 [2024-07-24 22:02:39.039473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.007 [2024-07-24 22:02:39.039480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.007 [2024-07-24 22:02:39.039488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.007 [2024-07-24 22:02:39.040427] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:00.007 [2024-07-24 22:02:39.040440] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:00.007 [2024-07-24 22:02:39.041431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.007 [2024-07-24 22:02:39.041483] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:00.007 [2024-07-24 22:02:39.041491] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:00.007 [2024-07-24 22:02:39.042441] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:00.007 [2024-07-24 22:02:39.042454] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:00.007 [2024-07-24 22:02:39.042501] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:00.007 [2024-07-24 22:02:39.047721] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.007 are Threshold: 0% 00:15:00.007 Life Percentage Used: 0% 00:15:00.007 Data Units Read: 0 00:15:00.007 Data Units Written: 0 00:15:00.007 Host Read Commands: 0 00:15:00.007 Host Write Commands: 0 00:15:00.007 Controller Busy Time: 0 minutes 00:15:00.007 Power Cycles: 0 00:15:00.007 Power On Hours: 0 hours 00:15:00.007 Unsafe Shutdowns: 0 00:15:00.007 Unrecoverable Media Errors: 0 00:15:00.007 Lifetime Error Log Entries: 0 00:15:00.007 Warning Temperature Time: 0 minutes 00:15:00.007 Critical Temperature Time: 0 minutes 00:15:00.007 00:15:00.007 Number of Queues 00:15:00.007 ================ 00:15:00.007 Number of I/O Submission Queues: 127 00:15:00.007 Number of I/O Completion Queues: 127 00:15:00.007 00:15:00.007 Active Namespaces 00:15:00.007 ================= 00:15:00.007 Namespace ID:1 00:15:00.007 Error Recovery Timeout: Unlimited 00:15:00.007 Command Set Identifier: NVM (00h) 00:15:00.007 Deallocate: Supported 00:15:00.007 Deallocated/Unwritten Error: Not Supported 00:15:00.007 Deallocated Read Value: Unknown 00:15:00.007 Deallocate in Write Zeroes: Not Supported 00:15:00.007 Deallocated Guard Field: 0xFFFF 00:15:00.007 Flush: Supported 00:15:00.007 Reservation: Supported 00:15:00.007 Namespace Sharing Capabilities: Multiple Controllers 00:15:00.007 Size (in LBAs): 131072 (0GiB) 00:15:00.007 Capacity (in LBAs): 131072 (0GiB) 00:15:00.007 Utilization (in LBAs): 131072 (0GiB) 00:15:00.007 NGUID: 81DC1E028DB34AC98682C8A84ADAD321 00:15:00.007 UUID: 81dc1e02-8db3-4ac9-8682-c8a84adad321 00:15:00.007 Thin Provisioning: Not Supported 00:15:00.007 Per-NS Atomic Units: Yes 00:15:00.007 Atomic Boundary Size (Normal): 0 00:15:00.007 Atomic Boundary Size (PFail): 0 00:15:00.007 Atomic Boundary Offset: 0 00:15:00.007 Maximum Single Source Range Length: 65535 00:15:00.007 Maximum Copy Length: 65535 00:15:00.007 Maximum Source Range Count: 1 00:15:00.007 NGUID/EUI64 Never Reused: No 00:15:00.007 Namespace Write Protected: No 00:15:00.007 Number of LBA Formats: 1 00:15:00.007 Current LBA Format: LBA Format #00 00:15:00.007 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:00.007 00:15:00.007 22:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:00.007 EAL: No free 2048 kB hugepages reported 
on node 1 00:15:00.266 [2024-07-24 22:02:39.263497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.528 Initializing NVMe Controllers 00:15:05.528 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.528 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:05.528 Initialization complete. Launching workers. 00:15:05.528 ======================================================== 00:15:05.528 Latency(us) 00:15:05.528 Device Information : IOPS MiB/s Average min max 00:15:05.528 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39924.22 155.95 3205.68 906.88 10679.25 00:15:05.528 ======================================================== 00:15:05.528 Total : 39924.22 155.95 3205.68 906.88 10679.25 00:15:05.528 00:15:05.528 [2024-07-24 22:02:44.283015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.528 22:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:05.528 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.528 [2024-07-24 22:02:44.502045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.790 Initializing NVMe Controllers 00:15:10.790 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:10.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:10.790 Initialization complete. Launching workers. 
00:15:10.790 ======================================================== 00:15:10.790 Latency(us) 00:15:10.790 Device Information : IOPS MiB/s Average min max 00:15:10.790 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.25 62.72 7976.84 5983.02 9975.92 00:15:10.790 ======================================================== 00:15:10.790 Total : 16057.25 62.72 7976.84 5983.02 9975.92 00:15:10.790 00:15:10.790 [2024-07-24 22:02:49.542726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.790 22:02:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:10.790 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.790 [2024-07-24 22:02:49.767719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.056 [2024-07-24 22:02:54.846059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.056 Initializing NVMe Controllers 00:15:16.056 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.056 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:16.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:16.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:16.056 Initialization complete. Launching workers. 00:15:16.056 Starting thread on core 2 00:15:16.056 Starting thread on core 3 00:15:16.056 Starting thread on core 1 00:15:16.056 22:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:16.056 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.056 [2024-07-24 22:02:55.147090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.372 [2024-07-24 22:02:58.498980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.372 Initializing NVMe Controllers 00:15:19.372 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.372 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.372 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:19.372 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:19.372 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:19.372 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:19.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:19.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:19.372 Initialization complete. Launching workers. 
00:15:19.372 Starting thread on core 1 with urgent priority queue 00:15:19.372 Starting thread on core 2 with urgent priority queue 00:15:19.372 Starting thread on core 3 with urgent priority queue 00:15:19.372 Starting thread on core 0 with urgent priority queue 00:15:19.372 SPDK bdev Controller (SPDK1 ) core 0: 4907.00 IO/s 20.38 secs/100000 ios 00:15:19.372 SPDK bdev Controller (SPDK1 ) core 1: 5994.33 IO/s 16.68 secs/100000 ios 00:15:19.372 SPDK bdev Controller (SPDK1 ) core 2: 6243.33 IO/s 16.02 secs/100000 ios 00:15:19.372 SPDK bdev Controller (SPDK1 ) core 3: 4869.67 IO/s 20.54 secs/100000 ios 00:15:19.372 ======================================================== 00:15:19.372 00:15:19.372 22:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:19.631 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.631 [2024-07-24 22:02:58.788127] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.631 Initializing NVMe Controllers 00:15:19.631 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.631 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.631 Namespace ID: 1 size: 0GB 00:15:19.631 Initialization complete. 00:15:19.631 INFO: using host memory buffer for IO 00:15:19.631 Hello world! 00:15:19.631 [2024-07-24 22:02:58.822485] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.889 22:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:19.889 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.147 [2024-07-24 22:02:59.103168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.081 Initializing NVMe Controllers 00:15:21.081 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.081 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.081 Initialization complete. Launching workers. 
00:15:21.081 submit (in ns) avg, min, max = 6392.8, 3103.2, 4996894.4 00:15:21.081 complete (in ns) avg, min, max = 20336.7, 1667.2, 4064416.8 00:15:21.081 00:15:21.081 Submit histogram 00:15:21.081 ================ 00:15:21.081 Range in us Cumulative Count 00:15:21.081 3.098 - 3.110: 0.0178% ( 3) 00:15:21.081 3.110 - 3.123: 0.1835% ( 28) 00:15:21.081 3.123 - 3.136: 0.7517% ( 96) 00:15:21.081 3.136 - 3.149: 1.7993% ( 177) 00:15:21.081 3.149 - 3.162: 3.2258% ( 241) 00:15:21.081 3.162 - 3.174: 5.1139% ( 319) 00:15:21.081 3.174 - 3.187: 7.6472% ( 428) 00:15:21.081 3.187 - 3.200: 10.9914% ( 565) 00:15:21.081 3.200 - 3.213: 15.0340% ( 683) 00:15:21.081 3.213 - 3.226: 19.4140% ( 740) 00:15:21.081 3.226 - 3.238: 24.8713% ( 922) 00:15:21.081 3.238 - 3.251: 30.9796% ( 1032) 00:15:21.081 3.251 - 3.264: 36.4842% ( 930) 00:15:21.081 3.264 - 3.277: 41.9592% ( 925) 00:15:21.081 3.277 - 3.302: 53.3827% ( 1930) 00:15:21.081 3.302 - 3.328: 62.6221% ( 1561) 00:15:21.081 3.328 - 3.354: 70.1924% ( 1279) 00:15:21.081 3.354 - 3.379: 77.1885% ( 1182) 00:15:21.081 3.379 - 3.405: 81.6691% ( 757) 00:15:21.081 3.405 - 3.430: 85.5283% ( 652) 00:15:21.081 3.430 - 3.456: 87.3690% ( 311) 00:15:21.081 3.456 - 3.482: 88.3102% ( 159) 00:15:21.081 3.482 - 3.507: 89.1921% ( 149) 00:15:21.081 3.507 - 3.533: 90.6008% ( 238) 00:15:21.081 3.533 - 3.558: 92.1456% ( 261) 00:15:21.081 3.558 - 3.584: 93.8384% ( 286) 00:15:21.081 3.584 - 3.610: 95.4898% ( 279) 00:15:21.081 3.610 - 3.635: 96.7268% ( 209) 00:15:21.081 3.635 - 3.661: 97.8574% ( 191) 00:15:21.081 3.661 - 3.686: 98.6209% ( 129) 00:15:21.081 3.686 - 3.712: 99.0885% ( 79) 00:15:21.081 3.712 - 3.738: 99.3016% ( 36) 00:15:21.081 3.738 - 3.763: 99.4377% ( 23) 00:15:21.081 3.763 - 3.789: 99.5324% ( 16) 00:15:21.081 3.789 - 3.814: 99.5738% ( 7) 00:15:21.081 3.814 - 3.840: 99.6034% ( 5) 00:15:21.081 3.840 - 3.866: 99.6094% ( 1) 00:15:21.081 4.198 - 4.224: 99.6153% ( 1) 00:15:21.081 5.478 - 5.504: 99.6212% ( 1) 00:15:21.081 5.914 - 5.939: 99.6271% ( 1) 00:15:21.081 5.939 - 5.965: 99.6330% ( 1) 00:15:21.081 5.965 - 5.990: 99.6389% ( 1) 00:15:21.081 6.042 - 6.067: 99.6449% ( 1) 00:15:21.081 6.093 - 6.118: 99.6508% ( 1) 00:15:21.081 6.246 - 6.272: 99.6567% ( 1) 00:15:21.081 6.272 - 6.298: 99.6626% ( 1) 00:15:21.081 6.323 - 6.349: 99.6685% ( 1) 00:15:21.081 6.374 - 6.400: 99.6745% ( 1) 00:15:21.081 6.400 - 6.426: 99.6804% ( 1) 00:15:21.081 6.426 - 6.451: 99.6922% ( 2) 00:15:21.081 6.451 - 6.477: 99.6981% ( 1) 00:15:21.081 6.477 - 6.502: 99.7041% ( 1) 00:15:21.081 6.502 - 6.528: 99.7100% ( 1) 00:15:21.081 6.528 - 6.554: 99.7159% ( 1) 00:15:21.081 6.554 - 6.605: 99.7218% ( 1) 00:15:21.081 6.656 - 6.707: 99.7277% ( 1) 00:15:21.081 6.707 - 6.758: 99.7336% ( 1) 00:15:21.081 6.810 - 6.861: 99.7514% ( 3) 00:15:21.081 6.861 - 6.912: 99.7632% ( 2) 00:15:21.081 6.912 - 6.963: 99.7692% ( 1) 00:15:21.081 6.963 - 7.014: 99.7751% ( 1) 00:15:21.081 7.014 - 7.066: 99.7869% ( 2) 00:15:21.081 7.066 - 7.117: 99.8047% ( 3) 00:15:21.081 7.117 - 7.168: 99.8106% ( 1) 00:15:21.081 7.219 - 7.270: 99.8165% ( 1) 00:15:21.081 7.373 - 7.424: 99.8224% ( 1) 00:15:21.081 7.475 - 7.526: 99.8343% ( 2) 00:15:21.081 7.526 - 7.578: 99.8520% ( 3) 00:15:21.081 7.578 - 7.629: 99.8698% ( 3) 00:15:21.081 7.680 - 7.731: 99.8757% ( 1) 00:15:21.082 7.782 - 7.834: 99.8816% ( 1) 00:15:21.082 7.987 - 8.038: 99.8875% ( 1) 00:15:21.082 8.090 - 8.141: 99.8935% ( 1) 00:15:21.082 8.653 - 8.704: 99.8994% ( 1) 00:15:21.082 8.704 - 8.755: 99.9112% ( 2) 00:15:21.082 11.469 - 11.520: 99.9171% ( 1) 00:15:21.082 15.053 - 15.155: 99.9231% ( 1) 
00:15:21.082 2752.512 - 2765.619: 99.9290% ( 1) 00:15:21.082 3984.589 - 4010.803: 99.9941% ( 11) 00:15:21.082 [2024-07-24 22:03:00.124241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.082 4980.736 - 5006.950: 100.0000% ( 1) 00:15:21.082 00:15:21.082 Complete histogram 00:15:21.082 ================== 00:15:21.082 Range in us Cumulative Count 00:15:21.082 1.664 - 1.677: 0.0769% ( 13) 00:15:21.082 1.677 - 1.690: 0.1657% ( 15) 00:15:21.082 1.690 - 1.702: 0.1835% ( 3) 00:15:21.082 1.702 - 1.715: 4.3682% ( 707) 00:15:21.082 1.715 - 1.728: 47.3868% ( 7268) 00:15:21.082 1.728 - 1.741: 77.0109% ( 5005) 00:15:21.082 1.741 - 1.754: 81.7875% ( 807) 00:15:21.082 1.754 - 1.766: 84.6819% ( 489) 00:15:21.082 1.766 - 1.779: 90.4883% ( 981) 00:15:21.082 1.779 - 1.792: 95.7265% ( 885) 00:15:21.082 1.792 - 1.805: 97.8929% ( 366) 00:15:21.082 1.805 - 1.818: 98.5321% ( 108) 00:15:21.082 1.818 - 1.830: 98.7393% ( 35) 00:15:21.082 1.830 - 1.843: 98.9464% ( 35) 00:15:21.082 1.843 - 1.856: 99.0648% ( 20) 00:15:21.082 1.856 - 1.869: 99.1358% ( 12) 00:15:21.082 1.869 - 1.882: 99.1536% ( 3) 00:15:21.082 1.882 - 1.894: 99.2128% ( 10) 00:15:21.082 1.894 - 1.907: 99.2601% ( 8) 00:15:21.082 1.907 - 1.920: 99.2720% ( 2) 00:15:21.082 1.920 - 1.933: 99.2897% ( 3) 00:15:21.082 1.933 - 1.946: 99.2956% ( 1) 00:15:21.082 1.946 - 1.958: 99.3016% ( 1) 00:15:21.082 1.971 - 1.984: 99.3075% ( 1) 00:15:21.082 2.010 - 2.022: 99.3134% ( 1) 00:15:21.082 2.022 - 2.035: 99.3193% ( 1) 00:15:21.082 2.035 - 2.048: 99.3312% ( 2) 00:15:21.082 2.048 - 2.061: 99.3371% ( 1) 00:15:21.082 2.125 - 2.138: 99.3430% ( 1) 00:15:21.082 2.214 - 2.227: 99.3489% ( 1) 00:15:21.082 3.123 - 3.136: 99.3548% ( 1) 00:15:21.082 4.762 - 4.787: 99.3608% ( 1) 00:15:21.082 4.864 - 4.890: 99.3667% ( 1) 00:15:21.082 4.915 - 4.941: 99.3726% ( 1) 00:15:21.082 4.966 - 4.992: 99.3785% ( 1) 00:15:21.082 5.069 - 5.094: 99.3844% ( 1) 00:15:21.082 5.146 - 5.171: 99.3904% ( 1) 00:15:21.082 5.171 - 5.197: 99.3963% ( 1) 00:15:21.082 5.197 - 5.222: 99.4022% ( 1) 00:15:21.082 5.299 - 5.325: 99.4081% ( 1) 00:15:21.082 5.325 - 5.350: 99.4140% ( 1) 00:15:21.082 5.402 - 5.427: 99.4199% ( 1) 00:15:21.082 5.427 - 5.453: 99.4259% ( 1) 00:15:21.082 5.504 - 5.530: 99.4377% ( 2) 00:15:21.082 5.555 - 5.581: 99.4495% ( 2) 00:15:21.082 5.606 - 5.632: 99.4555% ( 1) 00:15:21.082 5.811 - 5.837: 99.4614% ( 1) 00:15:21.082 5.837 - 5.862: 99.4732% ( 2) 00:15:21.082 5.862 - 5.888: 99.4791% ( 1) 00:15:21.082 5.990 - 6.016: 99.4851% ( 1) 00:15:21.082 6.042 - 6.067: 99.4910% ( 1) 00:15:21.082 6.221 - 6.246: 99.4969% ( 1) 00:15:21.082 6.272 - 6.298: 99.5028% ( 1) 00:15:21.082 6.451 - 6.477: 99.5087% ( 1) 00:15:21.082 11.059 - 11.110: 99.5146% ( 1) 00:15:21.082 14.131 - 14.234: 99.5206% ( 1) 00:15:21.082 17.715 - 17.818: 99.5265% ( 1) 00:15:21.082 49.562 - 49.766: 99.5324% ( 1) 00:15:21.082 2896.691 - 2909.798: 99.5383% ( 1) 00:15:21.082 3434.086 - 3460.301: 99.5442% ( 1) 00:15:21.082 3984.589 - 4010.803: 99.9941% ( 76) 00:15:21.082 4063.232 - 4089.446: 100.0000% ( 1) 00:15:21.082 00:15:21.082 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:21.082 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.082 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.082 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:21.082 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.339 [ 00:15:21.339 { 00:15:21.339 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.339 "subtype": "Discovery", 00:15:21.339 "listen_addresses": [], 00:15:21.339 "allow_any_host": true, 00:15:21.339 "hosts": [] 00:15:21.339 }, 00:15:21.339 { 00:15:21.339 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.339 "subtype": "NVMe", 00:15:21.339 "listen_addresses": [ 00:15:21.339 { 00:15:21.339 "trtype": "VFIOUSER", 00:15:21.339 "adrfam": "IPv4", 00:15:21.339 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.339 "trsvcid": "0" 00:15:21.339 } 00:15:21.339 ], 00:15:21.339 "allow_any_host": true, 00:15:21.339 "hosts": [], 00:15:21.339 "serial_number": "SPDK1", 00:15:21.339 "model_number": "SPDK bdev Controller", 00:15:21.339 "max_namespaces": 32, 00:15:21.339 "min_cntlid": 1, 00:15:21.339 "max_cntlid": 65519, 00:15:21.339 "namespaces": [ 00:15:21.339 { 00:15:21.339 "nsid": 1, 00:15:21.339 "bdev_name": "Malloc1", 00:15:21.339 "name": "Malloc1", 00:15:21.339 "nguid": "81DC1E028DB34AC98682C8A84ADAD321", 00:15:21.339 "uuid": "81dc1e02-8db3-4ac9-8682-c8a84adad321" 00:15:21.339 } 00:15:21.339 ] 00:15:21.339 }, 00:15:21.339 { 00:15:21.339 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.339 "subtype": "NVMe", 00:15:21.339 "listen_addresses": [ 00:15:21.339 { 00:15:21.339 "trtype": "VFIOUSER", 00:15:21.339 "adrfam": "IPv4", 00:15:21.339 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.339 "trsvcid": "0" 00:15:21.339 } 00:15:21.339 ], 00:15:21.339 "allow_any_host": true, 00:15:21.339 "hosts": [], 00:15:21.339 "serial_number": "SPDK2", 00:15:21.339 "model_number": "SPDK bdev Controller", 00:15:21.339 "max_namespaces": 32, 00:15:21.339 "min_cntlid": 1, 00:15:21.339 "max_cntlid": 65519, 00:15:21.339 "namespaces": [ 00:15:21.339 { 00:15:21.339 "nsid": 1, 00:15:21.339 "bdev_name": "Malloc2", 00:15:21.339 "name": "Malloc2", 00:15:21.339 "nguid": "08C257ADCFC343579CD1A6738892586D", 00:15:21.339 "uuid": "08c257ad-cfc3-4357-9cd1-a6738892586d" 00:15:21.339 } 00:15:21.339 ] 00:15:21.339 } 00:15:21.339 ] 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2667159 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:21.339 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:21.339 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.339 [2024-07-24 22:03:00.523101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.339 Malloc3 00:15:21.596 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:21.596 [2024-07-24 22:03:00.716462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.596 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.596 Asynchronous Event Request test 00:15:21.596 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.596 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.596 Registering asynchronous event callbacks... 00:15:21.596 Starting namespace attribute notice tests for all controllers... 00:15:21.596 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:21.596 aer_cb - Changed Namespace 00:15:21.596 Cleaning up... 00:15:21.856 [ 00:15:21.856 { 00:15:21.856 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.856 "subtype": "Discovery", 00:15:21.856 "listen_addresses": [], 00:15:21.856 "allow_any_host": true, 00:15:21.856 "hosts": [] 00:15:21.856 }, 00:15:21.856 { 00:15:21.856 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.856 "subtype": "NVMe", 00:15:21.856 "listen_addresses": [ 00:15:21.856 { 00:15:21.856 "trtype": "VFIOUSER", 00:15:21.856 "adrfam": "IPv4", 00:15:21.856 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.856 "trsvcid": "0" 00:15:21.856 } 00:15:21.856 ], 00:15:21.856 "allow_any_host": true, 00:15:21.856 "hosts": [], 00:15:21.856 "serial_number": "SPDK1", 00:15:21.856 "model_number": "SPDK bdev Controller", 00:15:21.856 "max_namespaces": 32, 00:15:21.856 "min_cntlid": 1, 00:15:21.856 "max_cntlid": 65519, 00:15:21.856 "namespaces": [ 00:15:21.856 { 00:15:21.856 "nsid": 1, 00:15:21.856 "bdev_name": "Malloc1", 00:15:21.856 "name": "Malloc1", 00:15:21.856 "nguid": "81DC1E028DB34AC98682C8A84ADAD321", 00:15:21.856 "uuid": "81dc1e02-8db3-4ac9-8682-c8a84adad321" 00:15:21.856 }, 00:15:21.856 { 00:15:21.856 "nsid": 2, 00:15:21.856 "bdev_name": "Malloc3", 00:15:21.856 "name": "Malloc3", 00:15:21.856 "nguid": "5C23C33D3A0A41E9AD02ED75DD7E2925", 00:15:21.856 "uuid": "5c23c33d-3a0a-41e9-ad02-ed75dd7e2925" 00:15:21.856 } 00:15:21.856 ] 00:15:21.856 }, 00:15:21.856 { 00:15:21.856 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.856 "subtype": "NVMe", 00:15:21.856 "listen_addresses": [ 00:15:21.856 { 00:15:21.856 "trtype": "VFIOUSER", 00:15:21.856 "adrfam": "IPv4", 00:15:21.856 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.856 "trsvcid": "0" 00:15:21.856 } 00:15:21.856 ], 00:15:21.856 "allow_any_host": true, 00:15:21.856 "hosts": [], 00:15:21.856 
"serial_number": "SPDK2", 00:15:21.856 "model_number": "SPDK bdev Controller", 00:15:21.856 "max_namespaces": 32, 00:15:21.856 "min_cntlid": 1, 00:15:21.856 "max_cntlid": 65519, 00:15:21.856 "namespaces": [ 00:15:21.856 { 00:15:21.856 "nsid": 1, 00:15:21.856 "bdev_name": "Malloc2", 00:15:21.856 "name": "Malloc2", 00:15:21.856 "nguid": "08C257ADCFC343579CD1A6738892586D", 00:15:21.856 "uuid": "08c257ad-cfc3-4357-9cd1-a6738892586d" 00:15:21.856 } 00:15:21.856 ] 00:15:21.856 } 00:15:21.856 ] 00:15:21.856 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2667159 00:15:21.856 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.856 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:21.856 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:21.856 22:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:21.856 [2024-07-24 22:03:00.936946] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:15:21.856 [2024-07-24 22:03:00.936980] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667211 ] 00:15:21.856 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.856 [2024-07-24 22:03:00.963843] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:21.856 [2024-07-24 22:03:00.975941] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.856 [2024-07-24 22:03:00.975964] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9ffe527000 00:15:21.856 [2024-07-24 22:03:00.976940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.856 [2024-07-24 22:03:00.977946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.856 [2024-07-24 22:03:00.978952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.856 [2024-07-24 22:03:00.979957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.856 [2024-07-24 22:03:00.980963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.856 [2024-07-24 22:03:00.981974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.856 [2024-07-24 22:03:00.982981] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.856 [2024-07-24 22:03:00.983985] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.856 [2024-07-24 22:03:00.984996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.856 [2024-07-24 22:03:00.985008] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9ffe51c000 00:15:21.856 [2024-07-24 22:03:00.985901] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.856 [2024-07-24 22:03:00.995107] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:21.856 [2024-07-24 22:03:00.995130] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:21.856 [2024-07-24 22:03:01.000215] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:21.856 [2024-07-24 22:03:01.000253] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:21.856 [2024-07-24 22:03:01.000320] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:21.856 [2024-07-24 22:03:01.000337] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:21.856 [2024-07-24 22:03:01.000344] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:21.857 [2024-07-24 22:03:01.001217] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:21.857 [2024-07-24 22:03:01.001232] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:21.857 [2024-07-24 22:03:01.001241] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:21.857 [2024-07-24 22:03:01.002223] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:21.857 [2024-07-24 22:03:01.002234] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:21.857 [2024-07-24 22:03:01.002243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:21.857 [2024-07-24 22:03:01.003226] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:21.857 [2024-07-24 22:03:01.003237] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.857 [2024-07-24 22:03:01.004230] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:21.857 [2024-07-24 22:03:01.004243] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:21.857 [2024-07-24 22:03:01.004250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:21.857 [2024-07-24 22:03:01.004259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.857 [2024-07-24 22:03:01.004366] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:21.857 [2024-07-24 22:03:01.004373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.857 [2024-07-24 22:03:01.004379] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:21.857 [2024-07-24 22:03:01.005238] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:21.857 [2024-07-24 22:03:01.006245] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:21.857 [2024-07-24 22:03:01.007254] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:21.857 [2024-07-24 22:03:01.008253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.857 [2024-07-24 22:03:01.008294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.857 [2024-07-24 22:03:01.009267] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:21.857 [2024-07-24 22:03:01.009278] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.857 [2024-07-24 22:03:01.009285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.009304] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:21.857 [2024-07-24 22:03:01.009313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.009326] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.857 [2024-07-24 22:03:01.009333] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.857 [2024-07-24 22:03:01.009338] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.857 [2024-07-24 22:03:01.009351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.857 [2024-07-24 22:03:01.016724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:21.857 [2024-07-24 22:03:01.016738] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:21.857 [2024-07-24 22:03:01.016744] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:21.857 [2024-07-24 22:03:01.016750] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:21.857 [2024-07-24 22:03:01.016756] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:21.857 [2024-07-24 22:03:01.016765] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:21.857 [2024-07-24 22:03:01.016771] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:21.857 [2024-07-24 22:03:01.016777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.016786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.016799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:21.857 [2024-07-24 22:03:01.024720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:21.857 [2024-07-24 22:03:01.024736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.857 [2024-07-24 22:03:01.024745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.857 [2024-07-24 22:03:01.024754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.857 [2024-07-24 22:03:01.024763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.857 [2024-07-24 22:03:01.024770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.024780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.024790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:21.857 [2024-07-24 22:03:01.032720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:21.857 [2024-07-24 22:03:01.032730] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:21.857 [2024-07-24 22:03:01.032736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.032747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.032753] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.032763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.857 [2024-07-24 22:03:01.040720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:21.857 [2024-07-24 22:03:01.040773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.040784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.040792] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:21.857 [2024-07-24 22:03:01.040798] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:21.857 [2024-07-24 22:03:01.040803] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.857 [2024-07-24 22:03:01.040812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:21.857 [2024-07-24 22:03:01.048725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:21.857 [2024-07-24 22:03:01.048742] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:21.857 [2024-07-24 22:03:01.048754] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.048763] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.048771] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.857 [2024-07-24 22:03:01.048777] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.857 [2024-07-24 22:03:01.048782] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.857 [2024-07-24 22:03:01.048789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.857 [2024-07-24 22:03:01.056721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:21.857 [2024-07-24 22:03:01.056737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.056747] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.056755] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.857 [2024-07-24 22:03:01.056761] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.857 [2024-07-24 22:03:01.056766] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.857 [2024-07-24 22:03:01.056773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.857 [2024-07-24 22:03:01.064720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:21.857 [2024-07-24 22:03:01.064731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.857 [2024-07-24 22:03:01.064740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:21.858 [2024-07-24 22:03:01.064749] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:21.858 [2024-07-24 22:03:01.064758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:21.858 [2024-07-24 22:03:01.064764] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:21.858 [2024-07-24 22:03:01.064771] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:21.858 [2024-07-24 22:03:01.064778] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.858 [2024-07-24 22:03:01.064784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:21.858 [2024-07-24 22:03:01.064792] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:21.858 [2024-07-24 22:03:01.064810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:22.116 [2024-07-24 22:03:01.072722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:22.116 [2024-07-24 22:03:01.072738] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:22.117 [2024-07-24 22:03:01.080721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:22.117 [2024-07-24 22:03:01.080736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:22.117 [2024-07-24 22:03:01.088722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:22.117 [2024-07-24 22:03:01.088737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.117 [2024-07-24 22:03:01.096721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:22.117 [2024-07-24 22:03:01.096740] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:22.117 [2024-07-24 22:03:01.096746] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:22.117 [2024-07-24 22:03:01.096750] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:22.117 [2024-07-24 22:03:01.096755] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:22.117 [2024-07-24 22:03:01.096760] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:22.117 [2024-07-24 22:03:01.096767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:22.117 [2024-07-24 22:03:01.096775] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:22.117 [2024-07-24 22:03:01.096781] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:22.117 [2024-07-24 22:03:01.096786] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.117 [2024-07-24 22:03:01.096793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:22.117 [2024-07-24 22:03:01.096801] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:22.117 [2024-07-24 22:03:01.096806] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.117 [2024-07-24 22:03:01.096811] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.117 [2024-07-24 22:03:01.096818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.117 [2024-07-24 22:03:01.096826] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:22.117 [2024-07-24 22:03:01.096832] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:22.117 [2024-07-24 22:03:01.096836] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.117 [2024-07-24 22:03:01.096843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:22.117 [2024-07-24 22:03:01.104721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:22.117 [2024-07-24 22:03:01.104737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:22.117 [2024-07-24 22:03:01.104752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:22.117 [2024-07-24 22:03:01.104761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:22.117 ===================================================== 00:15:22.117 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:22.117 ===================================================== 00:15:22.117 Controller Capabilities/Features 00:15:22.117 ================================ 00:15:22.117 Vendor ID: 4e58 00:15:22.117 Subsystem Vendor ID: 4e58 00:15:22.117 Serial Number: SPDK2 00:15:22.117 Model Number: SPDK bdev Controller 00:15:22.117 Firmware Version: 24.09 00:15:22.117 Recommended Arb Burst: 6 00:15:22.117 IEEE OUI Identifier: 8d 6b 50 00:15:22.117 Multi-path I/O 00:15:22.117 May have multiple subsystem ports: Yes 00:15:22.117 May have multiple controllers: Yes 00:15:22.117 Associated with SR-IOV VF: No 00:15:22.117 Max Data Transfer Size: 131072 00:15:22.117 Max Number of Namespaces: 32 00:15:22.117 Max Number of I/O Queues: 127 00:15:22.117 NVMe Specification Version (VS): 1.3 00:15:22.117 NVMe Specification Version (Identify): 1.3 00:15:22.117 Maximum Queue Entries: 256 00:15:22.117 Contiguous Queues Required: Yes 00:15:22.117 Arbitration Mechanisms Supported 00:15:22.117 Weighted Round Robin: Not Supported 00:15:22.117 Vendor Specific: Not Supported 00:15:22.117 Reset Timeout: 15000 ms 00:15:22.117 Doorbell Stride: 4 bytes 00:15:22.117 NVM Subsystem Reset: Not Supported 00:15:22.117 Command Sets Supported 00:15:22.117 NVM Command Set: Supported 00:15:22.117 Boot Partition: Not Supported 00:15:22.117 Memory Page Size Minimum: 4096 bytes 00:15:22.117 Memory Page Size Maximum: 4096 bytes 00:15:22.117 Persistent Memory Region: Not Supported 00:15:22.117 Optional Asynchronous Events Supported 00:15:22.117 Namespace Attribute Notices: Supported 00:15:22.117 Firmware Activation Notices: Not Supported 00:15:22.117 ANA Change Notices: Not Supported 00:15:22.117 PLE Aggregate Log Change Notices: Not Supported 00:15:22.117 LBA Status Info Alert Notices: Not Supported 00:15:22.117 EGE Aggregate Log Change Notices: Not Supported 00:15:22.117 Normal NVM Subsystem Shutdown event: Not Supported 00:15:22.117 Zone Descriptor Change Notices: Not Supported 00:15:22.117 Discovery Log Change Notices: Not Supported 00:15:22.117 Controller Attributes 00:15:22.117 128-bit Host Identifier: Supported 00:15:22.117 Non-Operational Permissive Mode: Not Supported 00:15:22.117 NVM Sets: Not Supported 00:15:22.117 Read Recovery Levels: Not Supported 00:15:22.117 Endurance Groups: Not Supported 00:15:22.117 Predictable Latency Mode: Not Supported 00:15:22.117 Traffic Based Keep ALive: Not Supported 00:15:22.117 Namespace Granularity: Not Supported 00:15:22.117 SQ Associations: Not Supported 00:15:22.117 UUID List: Not Supported 00:15:22.117 Multi-Domain Subsystem: Not Supported 00:15:22.117 Fixed Capacity Management: Not Supported 00:15:22.117 Variable Capacity Management: Not Supported 00:15:22.117 Delete Endurance Group: Not Supported 00:15:22.117 Delete NVM Set: Not Supported 00:15:22.117 Extended LBA Formats Supported: Not Supported 00:15:22.117 Flexible Data Placement Supported: Not Supported 00:15:22.117 00:15:22.117 Controller Memory Buffer Support 00:15:22.117 ================================ 00:15:22.117 Supported: No 00:15:22.117 00:15:22.117 Persistent Memory Region Support 00:15:22.117 
================================ 00:15:22.117 Supported: No 00:15:22.117 00:15:22.117 Admin Command Set Attributes 00:15:22.117 ============================ 00:15:22.117 Security Send/Receive: Not Supported 00:15:22.117 Format NVM: Not Supported 00:15:22.117 Firmware Activate/Download: Not Supported 00:15:22.117 Namespace Management: Not Supported 00:15:22.117 Device Self-Test: Not Supported 00:15:22.117 Directives: Not Supported 00:15:22.117 NVMe-MI: Not Supported 00:15:22.117 Virtualization Management: Not Supported 00:15:22.117 Doorbell Buffer Config: Not Supported 00:15:22.117 Get LBA Status Capability: Not Supported 00:15:22.117 Command & Feature Lockdown Capability: Not Supported 00:15:22.117 Abort Command Limit: 4 00:15:22.117 Async Event Request Limit: 4 00:15:22.117 Number of Firmware Slots: N/A 00:15:22.117 Firmware Slot 1 Read-Only: N/A 00:15:22.117 Firmware Activation Without Reset: N/A 00:15:22.117 Multiple Update Detection Support: N/A 00:15:22.117 Firmware Update Granularity: No Information Provided 00:15:22.117 Per-Namespace SMART Log: No 00:15:22.117 Asymmetric Namespace Access Log Page: Not Supported 00:15:22.117 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:22.117 Command Effects Log Page: Supported 00:15:22.117 Get Log Page Extended Data: Supported 00:15:22.117 Telemetry Log Pages: Not Supported 00:15:22.117 Persistent Event Log Pages: Not Supported 00:15:22.117 Supported Log Pages Log Page: May Support 00:15:22.117 Commands Supported & Effects Log Page: Not Supported 00:15:22.117 Feature Identifiers & Effects Log Page:May Support 00:15:22.117 NVMe-MI Commands & Effects Log Page: May Support 00:15:22.117 Data Area 4 for Telemetry Log: Not Supported 00:15:22.117 Error Log Page Entries Supported: 128 00:15:22.117 Keep Alive: Supported 00:15:22.117 Keep Alive Granularity: 10000 ms 00:15:22.117 00:15:22.117 NVM Command Set Attributes 00:15:22.117 ========================== 00:15:22.117 Submission Queue Entry Size 00:15:22.117 Max: 64 00:15:22.117 Min: 64 00:15:22.117 Completion Queue Entry Size 00:15:22.117 Max: 16 00:15:22.117 Min: 16 00:15:22.117 Number of Namespaces: 32 00:15:22.117 Compare Command: Supported 00:15:22.117 Write Uncorrectable Command: Not Supported 00:15:22.117 Dataset Management Command: Supported 00:15:22.117 Write Zeroes Command: Supported 00:15:22.117 Set Features Save Field: Not Supported 00:15:22.117 Reservations: Not Supported 00:15:22.117 Timestamp: Not Supported 00:15:22.117 Copy: Supported 00:15:22.117 Volatile Write Cache: Present 00:15:22.117 Atomic Write Unit (Normal): 1 00:15:22.117 Atomic Write Unit (PFail): 1 00:15:22.117 Atomic Compare & Write Unit: 1 00:15:22.117 Fused Compare & Write: Supported 00:15:22.117 Scatter-Gather List 00:15:22.117 SGL Command Set: Supported (Dword aligned) 00:15:22.118 SGL Keyed: Not Supported 00:15:22.118 SGL Bit Bucket Descriptor: Not Supported 00:15:22.118 SGL Metadata Pointer: Not Supported 00:15:22.118 Oversized SGL: Not Supported 00:15:22.118 SGL Metadata Address: Not Supported 00:15:22.118 SGL Offset: Not Supported 00:15:22.118 Transport SGL Data Block: Not Supported 00:15:22.118 Replay Protected Memory Block: Not Supported 00:15:22.118 00:15:22.118 Firmware Slot Information 00:15:22.118 ========================= 00:15:22.118 Active slot: 1 00:15:22.118 Slot 1 Firmware Revision: 24.09 00:15:22.118 00:15:22.118 00:15:22.118 Commands Supported and Effects 00:15:22.118 ============================== 00:15:22.118 Admin Commands 00:15:22.118 -------------- 00:15:22.118 Get Log Page (02h): Supported 
00:15:22.118 Identify (06h): Supported 00:15:22.118 Abort (08h): Supported 00:15:22.118 Set Features (09h): Supported 00:15:22.118 Get Features (0Ah): Supported 00:15:22.118 Asynchronous Event Request (0Ch): Supported 00:15:22.118 Keep Alive (18h): Supported 00:15:22.118 I/O Commands 00:15:22.118 ------------ 00:15:22.118 Flush (00h): Supported LBA-Change 00:15:22.118 Write (01h): Supported LBA-Change 00:15:22.118 Read (02h): Supported 00:15:22.118 Compare (05h): Supported 00:15:22.118 Write Zeroes (08h): Supported LBA-Change 00:15:22.118 Dataset Management (09h): Supported LBA-Change 00:15:22.118 Copy (19h): Supported LBA-Change 00:15:22.118 00:15:22.118 Error Log 00:15:22.118 ========= 00:15:22.118 00:15:22.118 Arbitration 00:15:22.118 =========== 00:15:22.118 Arbitration Burst: 1 00:15:22.118 00:15:22.118 Power Management 00:15:22.118 ================ 00:15:22.118 Number of Power States: 1 00:15:22.118 Current Power State: Power State #0 00:15:22.118 Power State #0: 00:15:22.118 Max Power: 0.00 W 00:15:22.118 Non-Operational State: Operational 00:15:22.118 Entry Latency: Not Reported 00:15:22.118 Exit Latency: Not Reported 00:15:22.118 Relative Read Throughput: 0 00:15:22.118 Relative Read Latency: 0 00:15:22.118 Relative Write Throughput: 0 00:15:22.118 Relative Write Latency: 0 00:15:22.118 Idle Power: Not Reported 00:15:22.118 Active Power: Not Reported 00:15:22.118 Non-Operational Permissive Mode: Not Supported 00:15:22.118 00:15:22.118 Health Information 00:15:22.118 ================== 00:15:22.118 Critical Warnings: 00:15:22.118 Available Spare Space: OK 00:15:22.118 Temperature: OK 00:15:22.118 Device Reliability: OK 00:15:22.118 Read Only: No 00:15:22.118 Volatile Memory Backup: OK 00:15:22.118 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:22.118 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:22.118 Available Spare: 0% 00:15:22.118 Available Sp[2024-07-24 22:03:01.104849] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:22.118 [2024-07-24 22:03:01.112722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:22.118 [2024-07-24 22:03:01.112753] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:22.118 [2024-07-24 22:03:01.112764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.118 [2024-07-24 22:03:01.112772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.118 [2024-07-24 22:03:01.112780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.118 [2024-07-24 22:03:01.112787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.118 [2024-07-24 22:03:01.112833] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:22.118 [2024-07-24 22:03:01.112844] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:22.118 [2024-07-24 22:03:01.113833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:15:22.118 [2024-07-24 22:03:01.113878] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:22.118 [2024-07-24 22:03:01.113886] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:22.118 [2024-07-24 22:03:01.114845] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:22.118 [2024-07-24 22:03:01.114858] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:22.118 [2024-07-24 22:03:01.114904] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:22.118 [2024-07-24 22:03:01.115866] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.118 are Threshold: 0% 00:15:22.118 Life Percentage Used: 0% 00:15:22.118 Data Units Read: 0 00:15:22.118 Data Units Written: 0 00:15:22.118 Host Read Commands: 0 00:15:22.118 Host Write Commands: 0 00:15:22.118 Controller Busy Time: 0 minutes 00:15:22.118 Power Cycles: 0 00:15:22.118 Power On Hours: 0 hours 00:15:22.118 Unsafe Shutdowns: 0 00:15:22.118 Unrecoverable Media Errors: 0 00:15:22.118 Lifetime Error Log Entries: 0 00:15:22.118 Warning Temperature Time: 0 minutes 00:15:22.118 Critical Temperature Time: 0 minutes 00:15:22.118 00:15:22.118 Number of Queues 00:15:22.118 ================ 00:15:22.118 Number of I/O Submission Queues: 127 00:15:22.118 Number of I/O Completion Queues: 127 00:15:22.118 00:15:22.118 Active Namespaces 00:15:22.118 ================= 00:15:22.118 Namespace ID:1 00:15:22.118 Error Recovery Timeout: Unlimited 00:15:22.118 Command Set Identifier: NVM (00h) 00:15:22.118 Deallocate: Supported 00:15:22.118 Deallocated/Unwritten Error: Not Supported 00:15:22.118 Deallocated Read Value: Unknown 00:15:22.118 Deallocate in Write Zeroes: Not Supported 00:15:22.118 Deallocated Guard Field: 0xFFFF 00:15:22.118 Flush: Supported 00:15:22.118 Reservation: Supported 00:15:22.118 Namespace Sharing Capabilities: Multiple Controllers 00:15:22.118 Size (in LBAs): 131072 (0GiB) 00:15:22.118 Capacity (in LBAs): 131072 (0GiB) 00:15:22.118 Utilization (in LBAs): 131072 (0GiB) 00:15:22.118 NGUID: 08C257ADCFC343579CD1A6738892586D 00:15:22.118 UUID: 08c257ad-cfc3-4357-9cd1-a6738892586d 00:15:22.118 Thin Provisioning: Not Supported 00:15:22.118 Per-NS Atomic Units: Yes 00:15:22.118 Atomic Boundary Size (Normal): 0 00:15:22.118 Atomic Boundary Size (PFail): 0 00:15:22.118 Atomic Boundary Offset: 0 00:15:22.118 Maximum Single Source Range Length: 65535 00:15:22.118 Maximum Copy Length: 65535 00:15:22.118 Maximum Source Range Count: 1 00:15:22.118 NGUID/EUI64 Never Reused: No 00:15:22.118 Namespace Write Protected: No 00:15:22.118 Number of LBA Formats: 1 00:15:22.118 Current LBA Format: LBA Format #00 00:15:22.118 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:22.118 00:15:22.118 22:03:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:22.118 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.118 [2024-07-24 
22:03:01.324683] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.655 Initializing NVMe Controllers 00:15:27.655 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:27.655 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:27.655 Initialization complete. Launching workers. 00:15:27.655 ======================================================== 00:15:27.655 Latency(us) 00:15:27.655 Device Information : IOPS MiB/s Average min max 00:15:27.655 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39962.17 156.10 3202.66 917.05 8116.31 00:15:27.655 ======================================================== 00:15:27.655 Total : 39962.17 156.10 3202.66 917.05 8116.31 00:15:27.655 00:15:27.655 [2024-07-24 22:03:06.428965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.655 22:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.655 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.655 [2024-07-24 22:03:06.648594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.917 Initializing NVMe Controllers 00:15:32.917 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:32.917 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:32.917 Initialization complete. Launching workers. 
00:15:32.917 ======================================================== 00:15:32.917 Latency(us) 00:15:32.917 Device Information : IOPS MiB/s Average min max 00:15:32.917 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39926.98 155.96 3205.68 918.81 10659.41 00:15:32.917 ======================================================== 00:15:32.917 Total : 39926.98 155.96 3205.68 918.81 10659.41 00:15:32.917 00:15:32.917 [2024-07-24 22:03:11.669372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.917 22:03:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:32.917 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.917 [2024-07-24 22:03:11.879415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.181 [2024-07-24 22:03:17.014813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.181 Initializing NVMe Controllers 00:15:38.181 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.181 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.181 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:38.181 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:38.181 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:38.181 Initialization complete. Launching workers. 00:15:38.181 Starting thread on core 2 00:15:38.181 Starting thread on core 3 00:15:38.181 Starting thread on core 1 00:15:38.181 22:03:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:38.181 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.181 [2024-07-24 22:03:17.318142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.513 [2024-07-24 22:03:20.369921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.513 Initializing NVMe Controllers 00:15:41.513 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.513 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.513 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:41.513 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:41.513 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:41.513 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:41.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.513 Initialization complete. Launching workers. 
00:15:41.513 Starting thread on core 1 with urgent priority queue 00:15:41.513 Starting thread on core 2 with urgent priority queue 00:15:41.513 Starting thread on core 3 with urgent priority queue 00:15:41.513 Starting thread on core 0 with urgent priority queue 00:15:41.513 SPDK bdev Controller (SPDK2 ) core 0: 10692.33 IO/s 9.35 secs/100000 ios 00:15:41.513 SPDK bdev Controller (SPDK2 ) core 1: 9871.33 IO/s 10.13 secs/100000 ios 00:15:41.513 SPDK bdev Controller (SPDK2 ) core 2: 11617.00 IO/s 8.61 secs/100000 ios 00:15:41.513 SPDK bdev Controller (SPDK2 ) core 3: 9961.67 IO/s 10.04 secs/100000 ios 00:15:41.513 ======================================================== 00:15:41.513 00:15:41.513 22:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.513 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.513 [2024-07-24 22:03:20.667181] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.513 Initializing NVMe Controllers 00:15:41.513 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.513 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.513 Namespace ID: 1 size: 0GB 00:15:41.513 Initialization complete. 00:15:41.513 INFO: using host memory buffer for IO 00:15:41.513 Hello world! 00:15:41.513 [2024-07-24 22:03:20.678257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.513 22:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.772 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.772 [2024-07-24 22:03:20.962927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.152 Initializing NVMe Controllers 00:15:43.152 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.152 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.152 Initialization complete. Launching workers. 
00:15:43.152 submit (in ns) avg, min, max = 7573.0, 3098.4, 4000149.6 00:15:43.152 complete (in ns) avg, min, max = 22034.0, 1700.0, 4994835.2 00:15:43.152 00:15:43.152 Submit histogram 00:15:43.152 ================ 00:15:43.152 Range in us Cumulative Count 00:15:43.152 3.098 - 3.110: 0.1248% ( 21) 00:15:43.152 3.110 - 3.123: 0.7013% ( 97) 00:15:43.152 3.123 - 3.136: 1.9968% ( 218) 00:15:43.152 3.136 - 3.149: 4.3680% ( 399) 00:15:43.153 3.149 - 3.162: 7.6306% ( 549) 00:15:43.153 3.162 - 3.174: 12.3017% ( 786) 00:15:43.153 3.174 - 3.187: 18.0305% ( 964) 00:15:43.153 3.187 - 3.200: 23.8961% ( 987) 00:15:43.153 3.200 - 3.213: 29.8627% ( 1004) 00:15:43.153 3.213 - 3.226: 36.1859% ( 1064) 00:15:43.153 3.226 - 3.238: 42.5923% ( 1078) 00:15:43.153 3.238 - 3.251: 49.0105% ( 1080) 00:15:43.153 3.251 - 3.264: 53.9966% ( 839) 00:15:43.153 3.264 - 3.277: 57.3959% ( 572) 00:15:43.153 3.277 - 3.302: 63.8260% ( 1082) 00:15:43.153 3.302 - 3.328: 68.9725% ( 866) 00:15:43.153 3.328 - 3.354: 74.6657% ( 958) 00:15:43.153 3.354 - 3.379: 83.5562% ( 1496) 00:15:43.153 3.379 - 3.405: 86.7534% ( 538) 00:15:43.153 3.405 - 3.430: 87.9420% ( 200) 00:15:43.153 3.430 - 3.456: 88.6373% ( 117) 00:15:43.153 3.456 - 3.482: 89.4752% ( 141) 00:15:43.153 3.482 - 3.507: 90.9669% ( 251) 00:15:43.153 3.507 - 3.533: 92.7735% ( 304) 00:15:43.153 3.533 - 3.558: 94.5564% ( 300) 00:15:43.153 3.558 - 3.584: 95.6261% ( 180) 00:15:43.153 3.584 - 3.610: 96.8503% ( 206) 00:15:43.153 3.610 - 3.635: 98.0924% ( 209) 00:15:43.153 3.635 - 3.661: 98.8055% ( 120) 00:15:43.153 3.661 - 3.686: 99.1799% ( 63) 00:15:43.153 3.686 - 3.712: 99.4176% ( 40) 00:15:43.153 3.712 - 3.738: 99.5127% ( 16) 00:15:43.153 3.738 - 3.763: 99.6018% ( 15) 00:15:43.153 3.763 - 3.789: 99.6137% ( 2) 00:15:43.153 5.581 - 5.606: 99.6197% ( 1) 00:15:43.153 5.632 - 5.658: 99.6256% ( 1) 00:15:43.153 5.709 - 5.734: 99.6375% ( 2) 00:15:43.153 5.734 - 5.760: 99.6494% ( 2) 00:15:43.153 5.862 - 5.888: 99.6553% ( 1) 00:15:43.153 5.888 - 5.914: 99.6613% ( 1) 00:15:43.153 6.042 - 6.067: 99.6672% ( 1) 00:15:43.153 6.093 - 6.118: 99.6791% ( 2) 00:15:43.153 6.272 - 6.298: 99.6850% ( 1) 00:15:43.153 6.400 - 6.426: 99.6969% ( 2) 00:15:43.153 6.451 - 6.477: 99.7029% ( 1) 00:15:43.153 6.477 - 6.502: 99.7088% ( 1) 00:15:43.153 6.502 - 6.528: 99.7147% ( 1) 00:15:43.153 6.528 - 6.554: 99.7207% ( 1) 00:15:43.153 6.554 - 6.605: 99.7266% ( 1) 00:15:43.153 6.656 - 6.707: 99.7326% ( 1) 00:15:43.153 6.707 - 6.758: 99.7385% ( 1) 00:15:43.153 6.758 - 6.810: 99.7445% ( 1) 00:15:43.153 6.810 - 6.861: 99.7563% ( 2) 00:15:43.153 6.861 - 6.912: 99.7623% ( 1) 00:15:43.153 6.912 - 6.963: 99.7682% ( 1) 00:15:43.153 6.963 - 7.014: 99.7801% ( 2) 00:15:43.153 7.014 - 7.066: 99.7861% ( 1) 00:15:43.153 7.066 - 7.117: 99.7979% ( 2) 00:15:43.153 7.117 - 7.168: 99.8039% ( 1) 00:15:43.153 7.219 - 7.270: 99.8158% ( 2) 00:15:43.153 7.322 - 7.373: 99.8395% ( 4) 00:15:43.153 7.424 - 7.475: 99.8455% ( 1) 00:15:43.153 7.526 - 7.578: 99.8514% ( 1) 00:15:43.153 7.578 - 7.629: 99.8574% ( 1) 00:15:43.153 7.680 - 7.731: 99.8633% ( 1) 00:15:43.153 7.936 - 7.987: 99.8693% ( 1) 00:15:43.153 8.192 - 8.243: 99.8752% ( 1) 00:15:43.153 8.755 - 8.806: 99.8811% ( 1) 00:15:43.153 8.960 - 9.011: 99.8871% ( 1) 00:15:43.153 12.800 - 12.851: 99.8930% ( 1) 00:15:43.153 3984.589 - 4010.803: 100.0000% ( 18) 00:15:43.153 00:15:43.153 Complete histogram 00:15:43.153 ================== 00:15:43.153 Range in us Cumulative Count 00:15:43.153 1.690 - 1.702: 0.0059% ( 1) 00:15:43.153 1.702 - 1.715: 0.6121% ( 102) 00:15:43.153 1.715 - 1.728: 9.5382% ( 
1502) 00:15:43.153 1.728 - 1.741: 16.8776% ( 1235) 00:15:43.153 1.741 - 1.754: 18.6605% ( 300) 00:15:43.153 1.754 - 1.766: 23.4801% ( 811) 00:15:43.153 1.766 - [2024-07-24 22:03:22.061571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.153 1.779: 63.1545% ( 6676) 00:15:43.153 1.779 - 1.792: 89.9982% ( 4517) 00:15:43.153 1.792 - 1.805: 95.2992% ( 892) 00:15:43.153 1.805 - 1.818: 97.3792% ( 350) 00:15:43.153 1.818 - 1.830: 97.9735% ( 100) 00:15:43.153 1.830 - 1.843: 98.3657% ( 66) 00:15:43.153 1.843 - 1.856: 98.8471% ( 81) 00:15:43.153 1.856 - 1.869: 99.1502% ( 51) 00:15:43.153 1.869 - 1.882: 99.1977% ( 8) 00:15:43.153 1.882 - 1.894: 99.2334% ( 6) 00:15:43.153 1.894 - 1.907: 99.2571% ( 4) 00:15:43.153 1.907 - 1.920: 99.2809% ( 4) 00:15:43.153 1.920 - 1.933: 99.2869% ( 1) 00:15:43.153 1.946 - 1.958: 99.2928% ( 1) 00:15:43.153 1.958 - 1.971: 99.3047% ( 2) 00:15:43.153 2.010 - 2.022: 99.3106% ( 1) 00:15:43.153 2.022 - 2.035: 99.3225% ( 2) 00:15:43.153 2.061 - 2.074: 99.3285% ( 1) 00:15:43.153 3.942 - 3.968: 99.3403% ( 2) 00:15:43.153 3.968 - 3.994: 99.3463% ( 1) 00:15:43.153 4.147 - 4.173: 99.3522% ( 1) 00:15:43.153 4.224 - 4.250: 99.3582% ( 1) 00:15:43.153 4.403 - 4.429: 99.3641% ( 1) 00:15:43.153 4.429 - 4.454: 99.3701% ( 1) 00:15:43.153 4.506 - 4.531: 99.3760% ( 1) 00:15:43.153 4.582 - 4.608: 99.3819% ( 1) 00:15:43.153 4.787 - 4.813: 99.3879% ( 1) 00:15:43.153 4.864 - 4.890: 99.3938% ( 1) 00:15:43.153 4.915 - 4.941: 99.3998% ( 1) 00:15:43.153 5.069 - 5.094: 99.4057% ( 1) 00:15:43.153 5.120 - 5.146: 99.4117% ( 1) 00:15:43.153 5.171 - 5.197: 99.4176% ( 1) 00:15:43.153 5.197 - 5.222: 99.4235% ( 1) 00:15:43.153 5.248 - 5.274: 99.4295% ( 1) 00:15:43.153 5.274 - 5.299: 99.4354% ( 1) 00:15:43.153 5.376 - 5.402: 99.4414% ( 1) 00:15:43.153 5.427 - 5.453: 99.4473% ( 1) 00:15:43.153 5.504 - 5.530: 99.4533% ( 1) 00:15:43.153 5.555 - 5.581: 99.4592% ( 1) 00:15:43.153 5.581 - 5.606: 99.4651% ( 1) 00:15:43.153 5.811 - 5.837: 99.4711% ( 1) 00:15:43.153 5.862 - 5.888: 99.4770% ( 1) 00:15:43.153 6.374 - 6.400: 99.4830% ( 1) 00:15:43.153 8.141 - 8.192: 99.4889% ( 1) 00:15:43.153 8.960 - 9.011: 99.4949% ( 1) 00:15:43.153 3407.872 - 3434.086: 99.5008% ( 1) 00:15:43.153 3853.517 - 3879.731: 99.5067% ( 1) 00:15:43.153 3984.589 - 4010.803: 99.9881% ( 81) 00:15:43.153 4980.736 - 5006.950: 100.0000% ( 2) 00:15:43.153 00:15:43.153 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:43.153 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:43.153 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:43.153 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:43.153 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.153 [ 00:15:43.153 { 00:15:43.153 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.153 "subtype": "Discovery", 00:15:43.153 "listen_addresses": [], 00:15:43.153 "allow_any_host": true, 00:15:43.153 "hosts": [] 00:15:43.153 }, 00:15:43.153 { 00:15:43.153 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.153 "subtype": "NVMe", 00:15:43.153 "listen_addresses": [ 
00:15:43.153 { 00:15:43.153 "trtype": "VFIOUSER", 00:15:43.153 "adrfam": "IPv4", 00:15:43.153 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.153 "trsvcid": "0" 00:15:43.153 } 00:15:43.153 ], 00:15:43.153 "allow_any_host": true, 00:15:43.153 "hosts": [], 00:15:43.153 "serial_number": "SPDK1", 00:15:43.153 "model_number": "SPDK bdev Controller", 00:15:43.153 "max_namespaces": 32, 00:15:43.153 "min_cntlid": 1, 00:15:43.153 "max_cntlid": 65519, 00:15:43.153 "namespaces": [ 00:15:43.153 { 00:15:43.153 "nsid": 1, 00:15:43.153 "bdev_name": "Malloc1", 00:15:43.153 "name": "Malloc1", 00:15:43.153 "nguid": "81DC1E028DB34AC98682C8A84ADAD321", 00:15:43.153 "uuid": "81dc1e02-8db3-4ac9-8682-c8a84adad321" 00:15:43.153 }, 00:15:43.153 { 00:15:43.153 "nsid": 2, 00:15:43.153 "bdev_name": "Malloc3", 00:15:43.153 "name": "Malloc3", 00:15:43.153 "nguid": "5C23C33D3A0A41E9AD02ED75DD7E2925", 00:15:43.153 "uuid": "5c23c33d-3a0a-41e9-ad02-ed75dd7e2925" 00:15:43.153 } 00:15:43.153 ] 00:15:43.153 }, 00:15:43.153 { 00:15:43.153 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.153 "subtype": "NVMe", 00:15:43.153 "listen_addresses": [ 00:15:43.153 { 00:15:43.153 "trtype": "VFIOUSER", 00:15:43.153 "adrfam": "IPv4", 00:15:43.153 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.153 "trsvcid": "0" 00:15:43.153 } 00:15:43.153 ], 00:15:43.153 "allow_any_host": true, 00:15:43.153 "hosts": [], 00:15:43.154 "serial_number": "SPDK2", 00:15:43.154 "model_number": "SPDK bdev Controller", 00:15:43.154 "max_namespaces": 32, 00:15:43.154 "min_cntlid": 1, 00:15:43.154 "max_cntlid": 65519, 00:15:43.154 "namespaces": [ 00:15:43.154 { 00:15:43.154 "nsid": 1, 00:15:43.154 "bdev_name": "Malloc2", 00:15:43.154 "name": "Malloc2", 00:15:43.154 "nguid": "08C257ADCFC343579CD1A6738892586D", 00:15:43.154 "uuid": "08c257ad-cfc3-4357-9cd1-a6738892586d" 00:15:43.154 } 00:15:43.154 ] 00:15:43.154 } 00:15:43.154 ] 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2671370 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:43.154 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:43.154 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.413 [2024-07-24 22:03:22.461148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.413 Malloc4 00:15:43.413 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:43.672 [2024-07-24 22:03:22.647488] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.672 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.672 Asynchronous Event Request test 00:15:43.672 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.672 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.672 Registering asynchronous event callbacks... 00:15:43.672 Starting namespace attribute notice tests for all controllers... 00:15:43.672 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.672 aer_cb - Changed Namespace 00:15:43.672 Cleaning up... 00:15:43.672 [ 00:15:43.672 { 00:15:43.672 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.672 "subtype": "Discovery", 00:15:43.672 "listen_addresses": [], 00:15:43.672 "allow_any_host": true, 00:15:43.672 "hosts": [] 00:15:43.672 }, 00:15:43.672 { 00:15:43.672 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.672 "subtype": "NVMe", 00:15:43.672 "listen_addresses": [ 00:15:43.672 { 00:15:43.672 "trtype": "VFIOUSER", 00:15:43.672 "adrfam": "IPv4", 00:15:43.672 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.672 "trsvcid": "0" 00:15:43.672 } 00:15:43.672 ], 00:15:43.672 "allow_any_host": true, 00:15:43.672 "hosts": [], 00:15:43.672 "serial_number": "SPDK1", 00:15:43.672 "model_number": "SPDK bdev Controller", 00:15:43.672 "max_namespaces": 32, 00:15:43.672 "min_cntlid": 1, 00:15:43.672 "max_cntlid": 65519, 00:15:43.672 "namespaces": [ 00:15:43.672 { 00:15:43.672 "nsid": 1, 00:15:43.672 "bdev_name": "Malloc1", 00:15:43.672 "name": "Malloc1", 00:15:43.673 "nguid": "81DC1E028DB34AC98682C8A84ADAD321", 00:15:43.673 "uuid": "81dc1e02-8db3-4ac9-8682-c8a84adad321" 00:15:43.673 }, 00:15:43.673 { 00:15:43.673 "nsid": 2, 00:15:43.673 "bdev_name": "Malloc3", 00:15:43.673 "name": "Malloc3", 00:15:43.673 "nguid": "5C23C33D3A0A41E9AD02ED75DD7E2925", 00:15:43.673 "uuid": "5c23c33d-3a0a-41e9-ad02-ed75dd7e2925" 00:15:43.673 } 00:15:43.673 ] 00:15:43.673 }, 00:15:43.673 { 00:15:43.673 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.673 "subtype": "NVMe", 00:15:43.673 "listen_addresses": [ 00:15:43.673 { 00:15:43.673 "trtype": "VFIOUSER", 00:15:43.673 "adrfam": "IPv4", 00:15:43.673 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.673 "trsvcid": "0" 00:15:43.673 } 00:15:43.673 ], 00:15:43.673 "allow_any_host": true, 00:15:43.673 "hosts": [], 00:15:43.673 
"serial_number": "SPDK2", 00:15:43.673 "model_number": "SPDK bdev Controller", 00:15:43.673 "max_namespaces": 32, 00:15:43.673 "min_cntlid": 1, 00:15:43.673 "max_cntlid": 65519, 00:15:43.673 "namespaces": [ 00:15:43.673 { 00:15:43.673 "nsid": 1, 00:15:43.673 "bdev_name": "Malloc2", 00:15:43.673 "name": "Malloc2", 00:15:43.673 "nguid": "08C257ADCFC343579CD1A6738892586D", 00:15:43.673 "uuid": "08c257ad-cfc3-4357-9cd1-a6738892586d" 00:15:43.673 }, 00:15:43.673 { 00:15:43.673 "nsid": 2, 00:15:43.673 "bdev_name": "Malloc4", 00:15:43.673 "name": "Malloc4", 00:15:43.673 "nguid": "13FBEF8464814D0D86FC61712FF35E72", 00:15:43.673 "uuid": "13fbef84-6481-4d0d-86fc-61712ff35e72" 00:15:43.673 } 00:15:43.673 ] 00:15:43.673 } 00:15:43.673 ] 00:15:43.673 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2671370 00:15:43.673 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:43.673 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2662887 00:15:43.673 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2662887 ']' 00:15:43.673 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2662887 00:15:43.673 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:43.673 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:43.673 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2662887 00:15:43.932 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:43.932 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:43.932 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2662887' 00:15:43.932 killing process with pid 2662887 00:15:43.932 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2662887 00:15:43.932 22:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2662887 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2671390 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2671390' 00:15:44.192 Process pid: 2671390 00:15:44.192 22:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2671390 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2671390 ']' 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:44.192 22:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:44.192 [2024-07-24 22:03:23.215889] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:44.192 [2024-07-24 22:03:23.216821] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:15:44.192 [2024-07-24 22:03:23.216862] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.192 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.192 [2024-07-24 22:03:23.288010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.192 [2024-07-24 22:03:23.358398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.192 [2024-07-24 22:03:23.358440] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.192 [2024-07-24 22:03:23.358449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.192 [2024-07-24 22:03:23.358458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.192 [2024-07-24 22:03:23.358466] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.192 [2024-07-24 22:03:23.358521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.192 [2024-07-24 22:03:23.358619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.192 [2024-07-24 22:03:23.358706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.192 [2024-07-24 22:03:23.358708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.452 [2024-07-24 22:03:23.439265] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:44.452 [2024-07-24 22:03:23.439400] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:44.452 [2024-07-24 22:03:23.439601] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:44.452 [2024-07-24 22:03:23.439901] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:44.452 [2024-07-24 22:03:23.440106] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:45.019 22:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:45.019 22:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:45.019 22:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:45.956 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:46.216 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:46.216 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:46.216 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.216 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:46.216 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:46.216 Malloc1 00:15:46.216 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:46.474 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:46.732 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:46.732 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.732 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:46.732 22:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:46.990 Malloc2 00:15:46.990 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:47.249 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:47.507 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:15:47.507 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:47.507 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2671390 00:15:47.507 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2671390 ']' 00:15:47.507 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2671390 00:15:47.507 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:47.507 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.507 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2671390 00:15:47.767 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:47.767 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:47.767 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2671390' 00:15:47.767 killing process with pid 2671390 00:15:47.767 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2671390 00:15:47.767 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2671390 00:15:47.767 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:47.767 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:47.767 00:15:47.767 real 0m51.699s 00:15:47.767 user 3m23.452s 00:15:47.767 sys 0m4.749s 00:15:47.767 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.767 22:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:47.767 ************************************ 00:15:47.767 END TEST nvmf_vfio_user 00:15:47.767 ************************************ 00:15:48.026 22:03:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:48.026 22:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:48.026 22:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.026 22:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 ************************************ 00:15:48.026 START TEST nvmf_vfio_user_nvme_compliance 00:15:48.026 ************************************ 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:48.026 * Looking for test storage... 
00:15:48.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.026 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2672244 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2672244' 00:15:48.027 Process pid: 2672244 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2672244 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2672244 ']' 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:48.027 22:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:48.027 [2024-07-24 22:03:27.215813] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
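At this point compliance.sh has launched its own nvmf_tgt (pid 2672244) on cores 0-2 and is waiting for the RPC socket to come up. Roughly, and assuming the repository path used by this job (the real waitforlisten helper in autotest_common.sh does more bookkeeping than this sketch):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &    # shm id 0, all trace groups, cores 0-2
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT  # killprocess comes from autotest_common.sh
    # waitforlisten: poll until the target answers on the default /var/tmp/spdk.sock
    until "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done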
00:15:48.027 [2024-07-24 22:03:27.215865] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.286 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.286 [2024-07-24 22:03:27.285166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.286 [2024-07-24 22:03:27.359108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.286 [2024-07-24 22:03:27.359144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.286 [2024-07-24 22:03:27.359154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.286 [2024-07-24 22:03:27.359163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.286 [2024-07-24 22:03:27.359171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.286 [2024-07-24 22:03:27.359217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.286 [2024-07-24 22:03:27.359233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.286 [2024-07-24 22:03:27.359235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.853 22:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:48.853 22:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:48.853 22:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.230 malloc0 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.230 22:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:50.230 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.230 00:15:50.230 00:15:50.230 CUnit - A unit testing framework for C - Version 2.1-3 00:15:50.230 http://cunit.sourceforge.net/ 00:15:50.230 00:15:50.230 00:15:50.230 Suite: nvme_compliance 00:15:50.230 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 22:03:29.267142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.230 [2024-07-24 22:03:29.268479] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:50.230 [2024-07-24 22:03:29.268495] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:50.230 [2024-07-24 22:03:29.268503] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:50.230 [2024-07-24 22:03:29.270169] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.230 passed 00:15:50.230 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 22:03:29.347692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.230 [2024-07-24 22:03:29.350709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.230 passed 00:15:50.230 Test: admin_identify_ns ...[2024-07-24 22:03:29.429742] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.488 [2024-07-24 22:03:29.491727] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:50.488 [2024-07-24 22:03:29.499726] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:50.488 [2024-07-24 
22:03:29.520821] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.488 passed 00:15:50.488 Test: admin_get_features_mandatory_features ...[2024-07-24 22:03:29.594079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.488 [2024-07-24 22:03:29.597103] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.488 passed 00:15:50.488 Test: admin_get_features_optional_features ...[2024-07-24 22:03:29.671587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.488 [2024-07-24 22:03:29.674605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.488 passed 00:15:50.747 Test: admin_set_features_number_of_queues ...[2024-07-24 22:03:29.748050] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.747 [2024-07-24 22:03:29.853814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.747 passed 00:15:50.747 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 22:03:29.927255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.747 [2024-07-24 22:03:29.930272] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.747 passed 00:15:51.005 Test: admin_get_log_page_with_lpo ...[2024-07-24 22:03:30.005706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.006 [2024-07-24 22:03:30.075730] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:51.006 [2024-07-24 22:03:30.088788] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.006 passed 00:15:51.006 Test: fabric_property_get ...[2024-07-24 22:03:30.164276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.006 [2024-07-24 22:03:30.165513] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:51.006 [2024-07-24 22:03:30.167296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.006 passed 00:15:51.264 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 22:03:30.242810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.264 [2024-07-24 22:03:30.244053] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:51.264 [2024-07-24 22:03:30.245829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.264 passed 00:15:51.264 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 22:03:30.321261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.264 [2024-07-24 22:03:30.404723] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.264 [2024-07-24 22:03:30.420723] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.264 [2024-07-24 22:03:30.425888] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.264 passed 00:15:51.524 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 22:03:30.499927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.524 [2024-07-24 22:03:30.501171] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
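The controller these CUnit cases exercise was created by the rpc_cmd sequence logged at 22:03:29. rpc_cmd wraps scripts/rpc.py, so an equivalent manual setup looks roughly like this (socket and repository paths as used in this job):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0                               # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32  # allow any host, max 32 namespaces
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # then the compliance suite is pointed at the vfio-user controller:
    test/nvme/compliance/nvme_compliance -g -r \
        'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'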
00:15:51.524 [2024-07-24 22:03:30.502950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.524 passed 00:15:51.524 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 22:03:30.576723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.524 [2024-07-24 22:03:30.654720] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:51.524 [2024-07-24 22:03:30.678723] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.524 [2024-07-24 22:03:30.683806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.524 passed 00:15:51.783 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 22:03:30.757076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.783 [2024-07-24 22:03:30.758317] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:51.783 [2024-07-24 22:03:30.758342] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:51.783 [2024-07-24 22:03:30.760092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.783 passed 00:15:51.783 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 22:03:30.834516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.783 [2024-07-24 22:03:30.925732] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:51.783 [2024-07-24 22:03:30.933727] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:51.783 [2024-07-24 22:03:30.941730] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:51.783 [2024-07-24 22:03:30.949725] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:51.783 [2024-07-24 22:03:30.978801] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.042 passed 00:15:52.042 Test: admin_create_io_sq_verify_pc ...[2024-07-24 22:03:31.054111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.042 [2024-07-24 22:03:31.070729] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:52.042 [2024-07-24 22:03:31.088282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.042 passed 00:15:52.042 Test: admin_create_io_qp_max_qps ...[2024-07-24 22:03:31.161795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.420 [2024-07-24 22:03:32.273727] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:53.688 [2024-07-24 22:03:32.664307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.688 passed 00:15:53.688 Test: admin_create_io_sq_shared_cq ...[2024-07-24 22:03:32.738760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.688 [2024-07-24 22:03:32.871721] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:53.947 [2024-07-24 22:03:32.908782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.947 passed 00:15:53.947 00:15:53.947 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.947 
suites 1 1 n/a 0 0 00:15:53.947 tests 18 18 18 0 0 00:15:53.947 asserts 360 360 360 0 n/a 00:15:53.947 00:15:53.947 Elapsed time = 1.500 seconds 00:15:53.947 22:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2672244 00:15:53.947 22:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2672244 ']' 00:15:53.947 22:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2672244 00:15:53.947 22:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:53.947 22:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.947 22:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2672244 00:15:53.947 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.947 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.947 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2672244' 00:15:53.947 killing process with pid 2672244 00:15:53.947 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2672244 00:15:53.947 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2672244 00:15:54.207 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:54.207 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:54.207 00:15:54.207 real 0m6.168s 00:15:54.207 user 0m17.420s 00:15:54.207 sys 0m0.702s 00:15:54.207 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.207 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:54.207 ************************************ 00:15:54.207 END TEST nvmf_vfio_user_nvme_compliance 00:15:54.207 ************************************ 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.208 ************************************ 00:15:54.208 START TEST nvmf_vfio_user_fuzz 00:15:54.208 ************************************ 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:54.208 * Looking for test storage... 
00:15:54.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2673359 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2673359' 00:15:54.208 Process pid: 2673359 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2673359 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2673359 ']' 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
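The fuzz stage that follows repeats the same VFIOUSER target setup on a single core (-m 0x1) and then drives the controller with the nvme_fuzz app for about 30 seconds. In outline, using the exact invocation logged below:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    # -m 0x2 pins the fuzzer to core 1 (the target runs with -m 0x1 on core 0);
    # -t 30 runs for 30 seconds and -S 123456 sets the seed; -F selects the target,
    # and -N/-a are passed exactly as in the log (see the fuzzer's --help for their meaning).
    "$rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a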
00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.208 22:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.146 22:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.146 22:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:55.146 22:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.084 malloc0 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:56.084 22:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:28.154 Fuzzing completed. Shutting down the fuzz application 00:16:28.154 00:16:28.154 Dumping successful admin opcodes: 00:16:28.154 8, 9, 10, 24, 00:16:28.154 Dumping successful io opcodes: 00:16:28.154 0, 00:16:28.154 NS: 0x200003a1ef00 I/O qp, Total commands completed: 882238, total successful commands: 3437, random_seed: 1880303040 00:16:28.154 NS: 0x200003a1ef00 admin qp, Total commands completed: 212892, total successful commands: 1713, random_seed: 1187104256 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2673359 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2673359 ']' 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2673359 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2673359 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2673359' 00:16:28.154 killing process with pid 2673359 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2673359 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2673359 00:16:28.154 22:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:28.154 00:16:28.154 real 0m32.745s 00:16:28.154 user 0m29.161s 00:16:28.154 sys 0m32.491s 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:28.154 ************************************ 00:16:28.154 END TEST nvmf_vfio_user_fuzz 00:16:28.154 ************************************ 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.154 ************************************ 00:16:28.154 START TEST nvmf_auth_target 00:16:28.154 ************************************ 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:28.154 * Looking for test storage... 00:16:28.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.154 
22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:28.154 22:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.154 22:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- 
# pci_devs=() 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.713 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:34.714 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:34.714 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:34.714 Found net devices under 0000:af:00.0: cvl_0_0 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:34.714 Found net devices under 0000:af:00.1: cvl_0_1 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.714 22:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.714 22:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:16:34.714 00:16:34.714 --- 10.0.0.2 ping statistics --- 00:16:34.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.714 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:16:34.714 00:16:34.714 --- 10.0.0.1 ping statistics --- 00:16:34.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.714 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2681996 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2681996 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2681996 ']' 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
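Note: the nvmf_tcp_init steps traced above reduce to a small two-port loopback topology. One E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), its peer (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), NVMe/TCP traffic on port 4420 is allowed in, and reachability is verified in both directions before nvme-tcp is loaded. Condensed sketch, commands copied from the trace:

    # condensed from the nvmf_tcp_init trace above (cvl_0_0/cvl_0_1 are the
    # two E810 ports discovered earlier)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
    modprobe nvme-tcp

The nvmf_tgt started next runs inside this namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), so its listener sits on the target-side address 10.0.0.2.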
00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.714 22:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2682247 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4a547a549a692f6495eb1c690c02e83f32485ae64338f36b 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9ah 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4a547a549a692f6495eb1c690c02e83f32485ae64338f36b 0 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4a547a549a692f6495eb1c690c02e83f32485ae64338f36b 0 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:34.973 22:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4a547a549a692f6495eb1c690c02e83f32485ae64338f36b 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:34.973 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9ah 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9ah 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.9ah 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f0cce6f83d085191bccbdb8b916efcfaa8eb927ae2dc1e9b551614d19a1ddca4 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.QFG 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f0cce6f83d085191bccbdb8b916efcfaa8eb927ae2dc1e9b551614d19a1ddca4 3 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f0cce6f83d085191bccbdb8b916efcfaa8eb927ae2dc1e9b551614d19a1ddca4 3 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f0cce6f83d085191bccbdb8b916efcfaa8eb927ae2dc1e9b551614d19a1ddca4 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.QFG 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.QFG 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.QFG 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.232 22:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b182d898e41ce3f768910318a99bc26f 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Xhd 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b182d898e41ce3f768910318a99bc26f 1 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b182d898e41ce3f768910318a99bc26f 1 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b182d898e41ce3f768910318a99bc26f 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Xhd 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Xhd 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Xhd 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aedb8cd07cdbe92d9ebb018e8eda003317acae8dc493edd3 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ale 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
aedb8cd07cdbe92d9ebb018e8eda003317acae8dc493edd3 2 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aedb8cd07cdbe92d9ebb018e8eda003317acae8dc493edd3 2 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aedb8cd07cdbe92d9ebb018e8eda003317acae8dc493edd3 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ale 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ale 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ale 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2afc1a41862fd13a10b9fb0200befa099309bd44197297c6 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.232 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GPm 00:16:35.233 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2afc1a41862fd13a10b9fb0200befa099309bd44197297c6 2 00:16:35.233 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2afc1a41862fd13a10b9fb0200befa099309bd44197297c6 2 00:16:35.233 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.233 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.233 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2afc1a41862fd13a10b9fb0200befa099309bd44197297c6 00:16:35.233 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:35.233 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GPm 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GPm 
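Note: each gen_dhchap_key call above draws len/2 random bytes with xxd, then an inline python helper (its body is not shown in the trace) wraps the hex string into the DHHC-1 secret form "DHHC-1:<digest id>:<base64 payload>:" and writes it to a mode-0600 temp file; the digest ids follow the digests map in the trace (null=0, sha256=1, sha384=2, sha512=3). A minimal sketch of the visible shell side, using the sha384/48 case as an example:

    # sketch of gen_dhchap_key "sha384 48" as traced above; the DHHC-1 wrapping
    # itself is done by the inline python helper, which is only summarized here
    digest=2                                   # null=0 sha256=1 sha384=2 sha512=3
    key=$(xxd -p -c0 -l 24 /dev/urandom)       # 24 random bytes -> 48 hex characters
    file=$(mktemp -t spdk.key-sha384.XXX)      # e.g. /tmp/spdk.key-sha384.ale
    # format_dhchap_key writes "DHHC-1:0${digest}:<base64 payload>:" into "$file"
    chmod 0600 "$file"
    echo "$file"                               # path captured into keys[]/ckeys[]

The resulting strings reappear verbatim later as the --dhchap-secret / --dhchap-ctrl-secret arguments to nvme connect.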
00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.GPm 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c7bb563d28cc946f139d346109d6e2fa 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.j5r 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c7bb563d28cc946f139d346109d6e2fa 1 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c7bb563d28cc946f139d346109d6e2fa 1 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c7bb563d28cc946f139d346109d6e2fa 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.j5r 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.j5r 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.j5r 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # 
key=39c8a23ff8911aa6a1b99dc5b35a1272bf31930e74c5150456e42f379ea94467 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6F1 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 39c8a23ff8911aa6a1b99dc5b35a1272bf31930e74c5150456e42f379ea94467 3 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 39c8a23ff8911aa6a1b99dc5b35a1272bf31930e74c5150456e42f379ea94467 3 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=39c8a23ff8911aa6a1b99dc5b35a1272bf31930e74c5150456e42f379ea94467 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6F1 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6F1 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.6F1 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2681996 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2681996 ']' 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
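Note: at this point four host secrets and three controller secrets exist. keys[0..3] use the null/sha256/sha384/sha512 digests, ckeys[0..2] are their controller-side counterparts, and ckeys[3] is deliberately left empty so that key3 is exercised without a controller key (no bidirectional authentication for that index). The trace below registers every file twice through keyring_file_add_key: once against the nvmf_tgt RPC socket and once via hostrpc against the host-side spdk_tgt (-r /var/tmp/host.sock). Condensed sketch for the first pair (rpc.py stands for the full scripts/rpc.py path used in the trace):

    # register key0/ckey0 on both sides, as the following trace does for every key
    rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.9ah                         # target (default /var/tmp/spdk.sock)
    rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.9ah   # host side
    rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QFG
    rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QFG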
00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.491 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2682247 /var/tmp/host.sock 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2682247 ']' 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:35.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9ah 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.749 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.007 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.007 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.9ah 00:16:36.007 22:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.9ah 00:16:36.007 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.QFG ]] 00:16:36.007 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QFG 00:16:36.007 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:36.007 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.007 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.007 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QFG 00:16:36.007 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QFG 00:16:36.265 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:36.265 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Xhd 00:16:36.265 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.265 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.265 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.265 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Xhd 00:16:36.265 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Xhd 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ale ]] 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ale 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ale 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ale 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GPm 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.GPm 00:16:36.522 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.GPm 00:16:36.786 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.j5r ]] 00:16:36.786 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.j5r 00:16:36.786 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.786 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.786 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.786 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.j5r 00:16:36.786 22:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.j5r 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6F1 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.6F1 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.6F1 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.074 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:37.332 22:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.332 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.590 00:16:37.590 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.590 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.590 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.848 { 00:16:37.848 "cntlid": 1, 00:16:37.848 "qid": 0, 00:16:37.848 "state": "enabled", 00:16:37.848 "thread": "nvmf_tgt_poll_group_000", 00:16:37.848 "listen_address": { 00:16:37.848 "trtype": "TCP", 00:16:37.848 "adrfam": "IPv4", 00:16:37.848 "traddr": "10.0.0.2", 00:16:37.848 "trsvcid": "4420" 00:16:37.848 }, 00:16:37.848 "peer_address": { 00:16:37.848 "trtype": "TCP", 00:16:37.848 "adrfam": "IPv4", 00:16:37.848 "traddr": "10.0.0.1", 00:16:37.848 "trsvcid": "51808" 00:16:37.848 }, 00:16:37.848 "auth": { 00:16:37.848 "state": "completed", 00:16:37.848 "digest": "sha256", 00:16:37.848 "dhgroup": "null" 00:16:37.848 } 00:16:37.848 } 00:16:37.848 ]' 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.848 22:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.848 22:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.105 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.671 22:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.928 00:16:38.928 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.928 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.928 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.186 { 00:16:39.186 "cntlid": 3, 00:16:39.186 "qid": 0, 00:16:39.186 "state": "enabled", 00:16:39.186 "thread": "nvmf_tgt_poll_group_000", 00:16:39.186 "listen_address": { 00:16:39.186 "trtype": "TCP", 00:16:39.186 "adrfam": "IPv4", 00:16:39.186 "traddr": "10.0.0.2", 00:16:39.186 "trsvcid": "4420" 00:16:39.186 }, 00:16:39.186 "peer_address": { 00:16:39.186 "trtype": "TCP", 00:16:39.186 "adrfam": "IPv4", 00:16:39.186 "traddr": "10.0.0.1", 00:16:39.186 "trsvcid": "51844" 00:16:39.186 }, 00:16:39.186 "auth": { 00:16:39.186 "state": "completed", 00:16:39.186 "digest": "sha256", 00:16:39.186 "dhgroup": "null" 00:16:39.186 } 00:16:39.186 } 00:16:39.186 ]' 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:39.186 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.444 22:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.444 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.444 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.444 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:16:40.009 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.009 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:40.009 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.009 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.009 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.009 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.009 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.009 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
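Note: the pattern above repeats for every key index: bdev_nvme_set_options pins the host's allowed --dhchap-digests/--dhchap-dhgroups, nvmf_subsystem_add_host grants the host NQN access to cnode0 with --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), bdev_nvme_attach_controller authenticates with the same keys, and nvmf_subsystem_get_qpairs is checked until auth.digest/auth.dhgroup match and auth.state reads "completed"; a kernel-initiator nvme connect/disconnect using the serialized secrets closes each iteration. Those --dhchap-secret strings are the files generated earlier, serialized as DHHC-1:<digest id>:<base64 payload>:. A quick way to inspect one is sketched below, using the keys[0] secret from the first nvme connect; treating the trailing four bytes as a CRC-32 of the key is an assumption about the DHHC-1 format, not something shown in the trace:

    # decode the DHHC-1 secret used for keys[0] in the first 'nvme connect' above
    secret='DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==:'
    payload=${secret#DHHC-1:00:}       # strip the "DHHC-1:<digest id>:" prefix
    payload=${payload%:}               # strip the trailing ':'
    printf '%s' "$payload" | base64 -d | xxd
    # -> the 48-character hex key generated for keys[0] above, followed by
    #    four checksum bytes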
00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.267 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.524 00:16:40.524 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.524 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.524 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.524 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.524 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.524 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.524 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.781 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.781 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.781 { 00:16:40.781 "cntlid": 5, 00:16:40.781 "qid": 0, 00:16:40.781 "state": "enabled", 00:16:40.781 "thread": "nvmf_tgt_poll_group_000", 00:16:40.781 "listen_address": { 00:16:40.781 "trtype": "TCP", 00:16:40.781 "adrfam": "IPv4", 00:16:40.781 "traddr": "10.0.0.2", 00:16:40.781 "trsvcid": "4420" 00:16:40.781 }, 00:16:40.781 "peer_address": { 00:16:40.781 "trtype": "TCP", 00:16:40.781 "adrfam": "IPv4", 00:16:40.781 "traddr": "10.0.0.1", 00:16:40.781 "trsvcid": "51854" 00:16:40.781 }, 00:16:40.781 "auth": { 00:16:40.781 "state": "completed", 00:16:40.781 "digest": "sha256", 00:16:40.781 "dhgroup": "null" 00:16:40.781 } 00:16:40.781 } 00:16:40.781 ]' 00:16:40.781 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.781 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.782 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.782 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:40.782 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.782 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.782 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.782 22:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.039 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.604 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.861 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.861 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.861 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.861 00:16:41.861 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.861 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.861 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.119 { 00:16:42.119 "cntlid": 7, 00:16:42.119 "qid": 0, 00:16:42.119 "state": "enabled", 00:16:42.119 "thread": "nvmf_tgt_poll_group_000", 00:16:42.119 "listen_address": { 00:16:42.119 "trtype": "TCP", 00:16:42.119 "adrfam": "IPv4", 00:16:42.119 "traddr": "10.0.0.2", 00:16:42.119 "trsvcid": "4420" 00:16:42.119 }, 00:16:42.119 "peer_address": { 00:16:42.119 "trtype": "TCP", 00:16:42.119 "adrfam": "IPv4", 00:16:42.119 "traddr": "10.0.0.1", 00:16:42.119 "trsvcid": "51880" 00:16:42.119 }, 00:16:42.119 "auth": { 00:16:42.119 "state": "completed", 00:16:42.119 "digest": "sha256", 00:16:42.119 "dhgroup": "null" 00:16:42.119 } 00:16:42.119 } 00:16:42.119 ]' 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:42.119 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.376 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.376 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.376 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.376 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:16:42.940 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.940 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:42.940 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.940 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.940 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.940 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.940 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.940 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.940 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.198 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.455 00:16:43.455 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.455 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.455 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.713 { 00:16:43.713 "cntlid": 9, 00:16:43.713 "qid": 0, 00:16:43.713 "state": "enabled", 00:16:43.713 "thread": "nvmf_tgt_poll_group_000", 00:16:43.713 "listen_address": { 00:16:43.713 "trtype": "TCP", 00:16:43.713 "adrfam": "IPv4", 00:16:43.713 "traddr": "10.0.0.2", 00:16:43.713 "trsvcid": "4420" 00:16:43.713 }, 00:16:43.713 "peer_address": { 00:16:43.713 "trtype": "TCP", 00:16:43.713 "adrfam": "IPv4", 00:16:43.713 "traddr": "10.0.0.1", 00:16:43.713 "trsvcid": "51914" 00:16:43.713 }, 00:16:43.713 "auth": { 00:16:43.713 "state": "completed", 00:16:43.713 "digest": "sha256", 00:16:43.713 "dhgroup": "ffdhe2048" 00:16:43.713 } 00:16:43.713 } 00:16:43.713 ]' 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.713 22:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.970 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.535 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.793 00:16:44.793 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.793 22:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.793 22:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.051 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.051 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.051 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.051 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.051 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.051 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.051 { 00:16:45.051 "cntlid": 11, 00:16:45.051 "qid": 0, 00:16:45.051 "state": "enabled", 00:16:45.051 "thread": "nvmf_tgt_poll_group_000", 00:16:45.051 "listen_address": { 00:16:45.051 "trtype": "TCP", 00:16:45.051 "adrfam": "IPv4", 00:16:45.051 "traddr": "10.0.0.2", 00:16:45.051 "trsvcid": "4420" 00:16:45.051 }, 00:16:45.051 "peer_address": { 00:16:45.051 "trtype": "TCP", 00:16:45.051 "adrfam": "IPv4", 00:16:45.051 "traddr": "10.0.0.1", 00:16:45.051 "trsvcid": "51952" 00:16:45.051 }, 00:16:45.051 "auth": { 00:16:45.051 "state": "completed", 00:16:45.051 "digest": "sha256", 00:16:45.051 "dhgroup": "ffdhe2048" 00:16:45.051 } 00:16:45.051 } 00:16:45.051 ]' 00:16:45.051 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.051 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.051 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.309 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.309 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.309 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.309 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.309 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.309 22:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:16:45.875 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.875 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
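The trace above and below repeats the same pattern once per (digest, dhgroup, key) combination. The following is only a rough sketch of that loop, reconstructed from the xtrace lines in this log; the actual target/auth.sh may differ, hostrpc/rpc_cmd are the wrappers around scripts/rpc.py shown above, and "$hostnqn" / "$secret" stand in for the uuid-based host NQN and DHHC-1 secrets that appear verbatim in the trace.

    # sketch of the per-iteration flow seen in this log (not the real script)
    for dhgroup in null ffdhe2048 ffdhe3072; do          # dhgroups exercised so far in this trace
      for keyid in 0 1 2 3; do
        # restrict the host to the digest/dhgroup under test
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # register the host NQN on the target with the DH-HMAC-CHAP key (plus ctrlr key when one exists)
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "key$keyid"
        # authenticate over TCP from the SPDK host, then verify digest/dhgroup/state on the target
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid"
        hostrpc bdev_nvme_get_controllers | jq -r '.[].name'            # expect nvme0
        rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
            | jq -r '.[0].auth.state'                                    # expect "completed"
        hostrpc bdev_nvme_detach_controller nvme0
        # repeat the handshake with the kernel initiator, then clean up for the next iteration
        nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --dhchap-secret "$secret"
        nvme disconnect -n nqn.2024-03.io.spdk:cnode0
        rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
      done
    done
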
00:16:45.875 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.875 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.875 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.875 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.875 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.875 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.133 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.391 00:16:46.391 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.391 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.391 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.649 22:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.649 { 00:16:46.649 "cntlid": 13, 00:16:46.649 "qid": 0, 00:16:46.649 "state": "enabled", 00:16:46.649 "thread": "nvmf_tgt_poll_group_000", 00:16:46.649 "listen_address": { 00:16:46.649 "trtype": "TCP", 00:16:46.649 "adrfam": "IPv4", 00:16:46.649 "traddr": "10.0.0.2", 00:16:46.649 "trsvcid": "4420" 00:16:46.649 }, 00:16:46.649 "peer_address": { 00:16:46.649 "trtype": "TCP", 00:16:46.649 "adrfam": "IPv4", 00:16:46.649 "traddr": "10.0.0.1", 00:16:46.649 "trsvcid": "51982" 00:16:46.649 }, 00:16:46.649 "auth": { 00:16:46.649 "state": "completed", 00:16:46.649 "digest": "sha256", 00:16:46.649 "dhgroup": "ffdhe2048" 00:16:46.649 } 00:16:46.649 } 00:16:46.649 ]' 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.649 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.907 22:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.473 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.732 00:16:47.732 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.732 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.732 22:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.991 { 00:16:47.991 "cntlid": 15, 00:16:47.991 "qid": 0, 00:16:47.991 "state": "enabled", 00:16:47.991 "thread": "nvmf_tgt_poll_group_000", 00:16:47.991 "listen_address": { 00:16:47.991 "trtype": "TCP", 00:16:47.991 "adrfam": "IPv4", 00:16:47.991 "traddr": "10.0.0.2", 00:16:47.991 "trsvcid": "4420" 00:16:47.991 }, 00:16:47.991 "peer_address": { 00:16:47.991 "trtype": "TCP", 00:16:47.991 "adrfam": "IPv4", 00:16:47.991 "traddr": "10.0.0.1", 00:16:47.991 "trsvcid": "33356" 00:16:47.991 }, 00:16:47.991 "auth": { 00:16:47.991 "state": "completed", 00:16:47.991 "digest": "sha256", 00:16:47.991 "dhgroup": "ffdhe2048" 00:16:47.991 } 00:16:47.991 } 00:16:47.991 ]' 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.991 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.249 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.249 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.249 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.249 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:16:48.851 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.851 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:48.851 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.851 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.851 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.851 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.851 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.851 22:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.851 22:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.110 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.368 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.368 { 00:16:49.368 "cntlid": 17, 00:16:49.368 "qid": 0, 00:16:49.368 "state": "enabled", 00:16:49.368 
"thread": "nvmf_tgt_poll_group_000", 00:16:49.368 "listen_address": { 00:16:49.368 "trtype": "TCP", 00:16:49.368 "adrfam": "IPv4", 00:16:49.368 "traddr": "10.0.0.2", 00:16:49.368 "trsvcid": "4420" 00:16:49.368 }, 00:16:49.368 "peer_address": { 00:16:49.368 "trtype": "TCP", 00:16:49.368 "adrfam": "IPv4", 00:16:49.368 "traddr": "10.0.0.1", 00:16:49.368 "trsvcid": "33380" 00:16:49.368 }, 00:16:49.368 "auth": { 00:16:49.368 "state": "completed", 00:16:49.368 "digest": "sha256", 00:16:49.368 "dhgroup": "ffdhe3072" 00:16:49.368 } 00:16:49.368 } 00:16:49.368 ]' 00:16:49.368 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.626 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.626 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.626 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.626 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.626 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.626 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.626 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.883 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha256 ffdhe3072 1 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.449 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.707 00:16:50.707 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.707 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.707 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.965 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.965 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.965 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.965 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.965 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.965 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.965 { 00:16:50.965 "cntlid": 19, 00:16:50.965 "qid": 0, 00:16:50.965 "state": "enabled", 00:16:50.965 "thread": "nvmf_tgt_poll_group_000", 00:16:50.965 "listen_address": { 00:16:50.965 "trtype": "TCP", 00:16:50.965 "adrfam": "IPv4", 00:16:50.965 "traddr": "10.0.0.2", 00:16:50.965 "trsvcid": "4420" 00:16:50.965 }, 00:16:50.965 "peer_address": { 00:16:50.965 "trtype": "TCP", 00:16:50.965 "adrfam": "IPv4", 00:16:50.965 
"traddr": "10.0.0.1", 00:16:50.965 "trsvcid": "33402" 00:16:50.965 }, 00:16:50.965 "auth": { 00:16:50.965 "state": "completed", 00:16:50.965 "digest": "sha256", 00:16:50.965 "dhgroup": "ffdhe3072" 00:16:50.965 } 00:16:50.965 } 00:16:50.965 ]' 00:16:50.965 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.965 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.965 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.966 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.966 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.966 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.966 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.966 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.223 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:16:51.798 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.798 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:51.798 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.798 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.798 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.798 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.798 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.798 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.056 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.315 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.315 { 00:16:52.315 "cntlid": 21, 00:16:52.315 "qid": 0, 00:16:52.315 "state": "enabled", 00:16:52.315 "thread": "nvmf_tgt_poll_group_000", 00:16:52.315 "listen_address": { 00:16:52.315 "trtype": "TCP", 00:16:52.315 "adrfam": "IPv4", 00:16:52.315 "traddr": "10.0.0.2", 00:16:52.315 "trsvcid": "4420" 00:16:52.315 }, 00:16:52.315 "peer_address": { 00:16:52.315 "trtype": "TCP", 00:16:52.315 "adrfam": "IPv4", 00:16:52.315 "traddr": "10.0.0.1", 00:16:52.315 "trsvcid": "33434" 00:16:52.315 }, 00:16:52.315 "auth": { 00:16:52.315 "state": "completed", 00:16:52.315 "digest": "sha256", 00:16:52.315 "dhgroup": "ffdhe3072" 00:16:52.315 } 00:16:52.315 } 00:16:52.315 ]' 00:16:52.315 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:52.573 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.573 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.573 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.573 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.573 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.573 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.573 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.831 22:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.397 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.655 00:16:53.655 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.655 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.655 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.914 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.914 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.914 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.914 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.914 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.914 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.914 { 00:16:53.914 "cntlid": 23, 00:16:53.914 "qid": 0, 00:16:53.914 "state": "enabled", 00:16:53.914 "thread": "nvmf_tgt_poll_group_000", 00:16:53.914 "listen_address": { 00:16:53.914 "trtype": "TCP", 00:16:53.914 "adrfam": "IPv4", 00:16:53.914 "traddr": "10.0.0.2", 00:16:53.914 "trsvcid": "4420" 00:16:53.914 }, 00:16:53.914 "peer_address": { 00:16:53.914 "trtype": "TCP", 00:16:53.914 "adrfam": "IPv4", 00:16:53.914 "traddr": "10.0.0.1", 00:16:53.914 "trsvcid": "33454" 00:16:53.914 }, 00:16:53.914 "auth": { 00:16:53.914 "state": "completed", 00:16:53.914 "digest": "sha256", 00:16:53.914 "dhgroup": "ffdhe3072" 00:16:53.914 } 00:16:53.914 } 00:16:53.914 ]' 00:16:53.914 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.914 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.914 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.914 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.914 22:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.914 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.914 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.914 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.172 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:16:54.735 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.735 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:54.735 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.735 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.736 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.736 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.736 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.736 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.736 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.994 22:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.994 22:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.252 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.252 { 00:16:55.252 "cntlid": 25, 00:16:55.252 "qid": 0, 00:16:55.252 "state": "enabled", 00:16:55.252 "thread": "nvmf_tgt_poll_group_000", 00:16:55.252 "listen_address": { 00:16:55.252 "trtype": "TCP", 00:16:55.252 "adrfam": "IPv4", 00:16:55.252 "traddr": "10.0.0.2", 00:16:55.252 "trsvcid": "4420" 00:16:55.252 }, 00:16:55.252 "peer_address": { 00:16:55.252 "trtype": "TCP", 00:16:55.252 "adrfam": "IPv4", 00:16:55.252 "traddr": "10.0.0.1", 00:16:55.252 "trsvcid": "33482" 00:16:55.252 }, 00:16:55.252 "auth": { 00:16:55.252 "state": "completed", 00:16:55.252 "digest": "sha256", 00:16:55.252 "dhgroup": "ffdhe4096" 00:16:55.252 } 00:16:55.252 } 00:16:55.252 ]' 00:16:55.252 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.510 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.510 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.510 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.510 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.510 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.510 22:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.510 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.510 22:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:16:56.075 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.076 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:56.076 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.076 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.076 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.076 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.076 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.076 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.334 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.597 00:16:56.597 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.597 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.597 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.856 { 00:16:56.856 "cntlid": 27, 00:16:56.856 "qid": 0, 00:16:56.856 "state": "enabled", 00:16:56.856 "thread": "nvmf_tgt_poll_group_000", 00:16:56.856 "listen_address": { 00:16:56.856 "trtype": "TCP", 00:16:56.856 "adrfam": "IPv4", 00:16:56.856 "traddr": "10.0.0.2", 00:16:56.856 "trsvcid": "4420" 00:16:56.856 }, 00:16:56.856 "peer_address": { 00:16:56.856 "trtype": "TCP", 00:16:56.856 "adrfam": "IPv4", 00:16:56.856 "traddr": "10.0.0.1", 00:16:56.856 "trsvcid": "33516" 00:16:56.856 }, 00:16:56.856 "auth": { 00:16:56.856 "state": "completed", 00:16:56.856 "digest": "sha256", 00:16:56.856 "dhgroup": "ffdhe4096" 00:16:56.856 } 00:16:56.856 } 00:16:56.856 ]' 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.856 22:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:57.113 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.679 22:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.938 00:16:57.938 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.938 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.938 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.196 { 00:16:58.196 "cntlid": 29, 00:16:58.196 "qid": 0, 00:16:58.196 "state": "enabled", 00:16:58.196 "thread": "nvmf_tgt_poll_group_000", 00:16:58.196 "listen_address": { 00:16:58.196 "trtype": "TCP", 00:16:58.196 "adrfam": "IPv4", 00:16:58.196 "traddr": "10.0.0.2", 00:16:58.196 "trsvcid": "4420" 00:16:58.196 }, 00:16:58.196 "peer_address": { 00:16:58.196 "trtype": "TCP", 00:16:58.196 "adrfam": "IPv4", 00:16:58.196 "traddr": "10.0.0.1", 00:16:58.196 "trsvcid": "48456" 00:16:58.196 }, 00:16:58.196 "auth": { 00:16:58.196 "state": "completed", 00:16:58.196 "digest": "sha256", 00:16:58.196 "dhgroup": "ffdhe4096" 00:16:58.196 } 00:16:58.196 } 00:16:58.196 ]' 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.196 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.455 22:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:16:59.020 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.020 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:59.020 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.021 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.021 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.021 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.021 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.021 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.278 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:59.278 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.279 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.537 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.537 { 00:16:59.537 "cntlid": 31, 00:16:59.537 "qid": 0, 00:16:59.537 "state": "enabled", 00:16:59.537 "thread": "nvmf_tgt_poll_group_000", 00:16:59.537 "listen_address": { 00:16:59.537 "trtype": "TCP", 00:16:59.537 "adrfam": "IPv4", 00:16:59.537 "traddr": "10.0.0.2", 00:16:59.537 "trsvcid": "4420" 00:16:59.537 }, 00:16:59.537 "peer_address": { 00:16:59.537 "trtype": "TCP", 00:16:59.537 "adrfam": "IPv4", 00:16:59.537 "traddr": "10.0.0.1", 00:16:59.537 "trsvcid": "48496" 00:16:59.537 }, 00:16:59.537 "auth": { 00:16:59.537 "state": "completed", 00:16:59.537 "digest": "sha256", 00:16:59.537 "dhgroup": "ffdhe4096" 00:16:59.537 } 00:16:59.537 } 00:16:59.537 ]' 00:16:59.537 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.795 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.795 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.795 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.795 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.795 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.795 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.795 22:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.053 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.621 22:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.879 00:17:01.137 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.137 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.137 22:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.137 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.137 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.137 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.137 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.137 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.138 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.138 { 00:17:01.138 "cntlid": 33, 00:17:01.138 "qid": 0, 00:17:01.138 "state": "enabled", 00:17:01.138 "thread": "nvmf_tgt_poll_group_000", 00:17:01.138 "listen_address": { 00:17:01.138 "trtype": "TCP", 00:17:01.138 "adrfam": "IPv4", 00:17:01.138 "traddr": "10.0.0.2", 00:17:01.138 "trsvcid": "4420" 00:17:01.138 }, 00:17:01.138 "peer_address": { 00:17:01.138 "trtype": "TCP", 00:17:01.138 "adrfam": "IPv4", 00:17:01.138 "traddr": "10.0.0.1", 00:17:01.138 "trsvcid": "48528" 00:17:01.138 }, 00:17:01.138 "auth": { 00:17:01.138 "state": "completed", 00:17:01.138 "digest": "sha256", 00:17:01.138 "dhgroup": "ffdhe6144" 00:17:01.138 } 00:17:01.138 } 00:17:01.138 ]' 00:17:01.138 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.138 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.395 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.395 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.395 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.395 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.395 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.395 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.653 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:02.218 22:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.218 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.219 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.219 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.219 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.219 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.476 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.734 { 00:17:02.734 "cntlid": 35, 00:17:02.734 "qid": 0, 00:17:02.734 "state": "enabled", 00:17:02.734 "thread": "nvmf_tgt_poll_group_000", 00:17:02.734 "listen_address": { 00:17:02.734 "trtype": "TCP", 00:17:02.734 "adrfam": "IPv4", 00:17:02.734 "traddr": "10.0.0.2", 00:17:02.734 "trsvcid": "4420" 00:17:02.734 }, 00:17:02.734 "peer_address": { 00:17:02.734 "trtype": "TCP", 00:17:02.734 "adrfam": "IPv4", 00:17:02.734 "traddr": "10.0.0.1", 00:17:02.734 "trsvcid": "48552" 00:17:02.734 }, 00:17:02.734 "auth": { 00:17:02.734 "state": "completed", 00:17:02.734 "digest": "sha256", 00:17:02.734 "dhgroup": "ffdhe6144" 00:17:02.734 } 00:17:02.734 } 00:17:02.734 ]' 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.734 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.992 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.992 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.992 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.992 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.992 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.992 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.992 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:03.558 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.558 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:03.558 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.558 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.558 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.558 22:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.558 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.558 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.816 22:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.074 00:17:04.074 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.074 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.074 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.343 22:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.343 { 00:17:04.343 "cntlid": 37, 00:17:04.343 "qid": 0, 00:17:04.343 "state": "enabled", 00:17:04.343 "thread": "nvmf_tgt_poll_group_000", 00:17:04.343 "listen_address": { 00:17:04.343 "trtype": "TCP", 00:17:04.343 "adrfam": "IPv4", 00:17:04.343 "traddr": "10.0.0.2", 00:17:04.343 "trsvcid": "4420" 00:17:04.343 }, 00:17:04.343 "peer_address": { 00:17:04.343 "trtype": "TCP", 00:17:04.343 "adrfam": "IPv4", 00:17:04.343 "traddr": "10.0.0.1", 00:17:04.343 "trsvcid": "48582" 00:17:04.343 }, 00:17:04.343 "auth": { 00:17:04.343 "state": "completed", 00:17:04.343 "digest": "sha256", 00:17:04.343 "dhgroup": "ffdhe6144" 00:17:04.343 } 00:17:04.343 } 00:17:04.343 ]' 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.343 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.602 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.602 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.602 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.602 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:17:05.167 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.167 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:05.167 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.167 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.167 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.167 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.167 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.167 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.425 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.683 00:17:05.683 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.683 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.683 22:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.941 { 00:17:05.941 "cntlid": 39, 00:17:05.941 "qid": 0, 00:17:05.941 "state": "enabled", 00:17:05.941 "thread": "nvmf_tgt_poll_group_000", 00:17:05.941 "listen_address": { 00:17:05.941 "trtype": "TCP", 00:17:05.941 
"adrfam": "IPv4", 00:17:05.941 "traddr": "10.0.0.2", 00:17:05.941 "trsvcid": "4420" 00:17:05.941 }, 00:17:05.941 "peer_address": { 00:17:05.941 "trtype": "TCP", 00:17:05.941 "adrfam": "IPv4", 00:17:05.941 "traddr": "10.0.0.1", 00:17:05.941 "trsvcid": "48602" 00:17:05.941 }, 00:17:05.941 "auth": { 00:17:05.941 "state": "completed", 00:17:05.941 "digest": "sha256", 00:17:05.941 "dhgroup": "ffdhe6144" 00:17:05.941 } 00:17:05.941 } 00:17:05.941 ]' 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.941 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.199 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.199 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:06.765 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.765 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:06.765 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.765 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.765 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.765 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.765 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.765 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.765 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:07.023 22:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.023 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.588 00:17:07.588 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.588 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.588 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.588 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.589 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.589 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.589 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.589 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.589 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.589 { 00:17:07.589 "cntlid": 41, 00:17:07.589 "qid": 0, 00:17:07.589 "state": "enabled", 00:17:07.589 "thread": "nvmf_tgt_poll_group_000", 00:17:07.589 "listen_address": { 00:17:07.589 "trtype": "TCP", 00:17:07.589 "adrfam": "IPv4", 00:17:07.589 "traddr": "10.0.0.2", 00:17:07.589 "trsvcid": "4420" 00:17:07.589 }, 00:17:07.589 "peer_address": { 00:17:07.589 "trtype": "TCP", 00:17:07.589 "adrfam": "IPv4", 00:17:07.589 "traddr": "10.0.0.1", 00:17:07.589 "trsvcid": "37452" 00:17:07.589 
}, 00:17:07.589 "auth": { 00:17:07.589 "state": "completed", 00:17:07.589 "digest": "sha256", 00:17:07.589 "dhgroup": "ffdhe8192" 00:17:07.589 } 00:17:07.589 } 00:17:07.589 ]' 00:17:07.589 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.589 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.589 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.846 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.847 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.847 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.847 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.847 22:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.847 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:08.413 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.413 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:08.413 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.413 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.413 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.413 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.413 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.413 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.671 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.236 00:17:09.236 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.236 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.236 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.236 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.236 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.236 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.236 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.494 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.494 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.494 { 00:17:09.494 "cntlid": 43, 00:17:09.494 "qid": 0, 00:17:09.494 "state": "enabled", 00:17:09.494 "thread": "nvmf_tgt_poll_group_000", 00:17:09.494 "listen_address": { 00:17:09.494 "trtype": "TCP", 00:17:09.494 "adrfam": "IPv4", 00:17:09.494 "traddr": "10.0.0.2", 00:17:09.494 "trsvcid": "4420" 00:17:09.494 }, 00:17:09.494 "peer_address": { 00:17:09.494 "trtype": "TCP", 00:17:09.494 "adrfam": "IPv4", 00:17:09.494 "traddr": "10.0.0.1", 00:17:09.494 "trsvcid": "37478" 00:17:09.494 }, 00:17:09.494 "auth": { 00:17:09.494 "state": "completed", 00:17:09.494 "digest": "sha256", 00:17:09.494 "dhgroup": "ffdhe8192" 00:17:09.494 } 00:17:09.494 } 00:17:09.494 ]' 00:17:09.494 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
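Each pass of target/auth.sh recorded here follows the same shape: the host-side RPC server is limited to a single digest/dhgroup pair, the target subsystem is given the matching DH-HMAC-CHAP key for the host, a controller is attached and its qpair is checked for auth.state "completed" with the expected digest and dhgroup, and the same credentials are then exercised once more through nvme-cli before the host entry is removed. A condensed, illustrative sketch of one such iteration, reconstructed only from the commands visible in this log (the rpc.py path, host socket, NQNs, addresses and key names are copied from the log; the wrapper function and secret placeholders are a reconstruction, not the actual test source):

    # sketch of one connect_authenticate iteration, as exercised per key index by target/auth.sh
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

    connect_authenticate() {   # $1=digest  $2=dhgroup  $3=key index
        # host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
        $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests "$1" --dhchap-dhgroups "$2"
        # target side: register the host with the key (and controller key, when one exists)
        $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" --dhchap-ctrlr-key "ckey$3"
        # host side: attach a controller, then confirm the qpair authenticated as expected
        $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key$3" --dhchap-ctrlr-key "ckey$3"
        $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
        $rpc -s $hostsock bdev_nvme_detach_controller nvme0
        # repeat the handshake through the kernel initiator with the same secrets
        nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
            --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
        nvme disconnect -n "$subnqn"
        $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    }

The log repeats this for key0 through key3 with sha256/ffdhe8192, and the later entries below switch the host options to sha384 with the null and ffdhe2048 groups before re-running the same loop.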
00:17:09.494 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.494 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.495 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.495 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.495 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.495 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.495 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.752 22:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.317 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.885 00:17:10.885 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.885 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.885 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.143 { 00:17:11.143 "cntlid": 45, 00:17:11.143 "qid": 0, 00:17:11.143 "state": "enabled", 00:17:11.143 "thread": "nvmf_tgt_poll_group_000", 00:17:11.143 "listen_address": { 00:17:11.143 "trtype": "TCP", 00:17:11.143 "adrfam": "IPv4", 00:17:11.143 "traddr": "10.0.0.2", 00:17:11.143 "trsvcid": "4420" 00:17:11.143 }, 00:17:11.143 "peer_address": { 00:17:11.143 "trtype": "TCP", 00:17:11.143 "adrfam": "IPv4", 00:17:11.143 "traddr": "10.0.0.1", 00:17:11.143 "trsvcid": "37506" 00:17:11.143 }, 00:17:11.143 "auth": { 00:17:11.143 "state": "completed", 00:17:11.143 "digest": "sha256", 00:17:11.143 "dhgroup": "ffdhe8192" 00:17:11.143 } 00:17:11.143 } 00:17:11.143 ]' 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == 
\f\f\d\h\e\8\1\9\2 ]] 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.143 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.401 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:17:11.966 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.966 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:11.966 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.966 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.966 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.966 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.966 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.966 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.223 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.224 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.224 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.484 00:17:12.484 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.484 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.484 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.742 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.742 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.742 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.742 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.742 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.742 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.742 { 00:17:12.742 "cntlid": 47, 00:17:12.742 "qid": 0, 00:17:12.742 "state": "enabled", 00:17:12.742 "thread": "nvmf_tgt_poll_group_000", 00:17:12.742 "listen_address": { 00:17:12.742 "trtype": "TCP", 00:17:12.742 "adrfam": "IPv4", 00:17:12.742 "traddr": "10.0.0.2", 00:17:12.742 "trsvcid": "4420" 00:17:12.742 }, 00:17:12.742 "peer_address": { 00:17:12.742 "trtype": "TCP", 00:17:12.742 "adrfam": "IPv4", 00:17:12.742 "traddr": "10.0.0.1", 00:17:12.742 "trsvcid": "37538" 00:17:12.742 }, 00:17:12.742 "auth": { 00:17:12.742 "state": "completed", 00:17:12.742 "digest": "sha256", 00:17:12.742 "dhgroup": "ffdhe8192" 00:17:12.742 } 00:17:12.742 } 00:17:12.742 ]' 00:17:12.742 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.742 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.742 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.001 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.001 22:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.001 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.001 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.001 22:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.001 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.569 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 
-- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.827 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.085 00:17:14.085 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.085 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.085 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.343 { 00:17:14.343 "cntlid": 49, 00:17:14.343 "qid": 0, 00:17:14.343 "state": "enabled", 00:17:14.343 "thread": "nvmf_tgt_poll_group_000", 00:17:14.343 "listen_address": { 00:17:14.343 "trtype": "TCP", 00:17:14.343 "adrfam": "IPv4", 00:17:14.343 "traddr": "10.0.0.2", 00:17:14.343 "trsvcid": "4420" 00:17:14.343 }, 00:17:14.343 "peer_address": { 00:17:14.343 "trtype": "TCP", 00:17:14.343 "adrfam": "IPv4", 00:17:14.343 "traddr": "10.0.0.1", 00:17:14.343 "trsvcid": "37556" 00:17:14.343 }, 00:17:14.343 "auth": { 00:17:14.343 "state": "completed", 00:17:14.343 "digest": "sha384", 00:17:14.343 "dhgroup": "null" 00:17:14.343 } 00:17:14.343 } 00:17:14.343 ]' 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.343 22:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.601 22:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.167 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.425 00:17:15.425 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.425 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.425 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.683 { 00:17:15.683 "cntlid": 51, 00:17:15.683 "qid": 0, 00:17:15.683 "state": "enabled", 00:17:15.683 "thread": "nvmf_tgt_poll_group_000", 00:17:15.683 "listen_address": { 00:17:15.683 "trtype": "TCP", 00:17:15.683 "adrfam": "IPv4", 00:17:15.683 "traddr": "10.0.0.2", 00:17:15.683 "trsvcid": "4420" 00:17:15.683 }, 00:17:15.683 "peer_address": { 00:17:15.683 "trtype": "TCP", 00:17:15.683 "adrfam": "IPv4", 00:17:15.683 "traddr": "10.0.0.1", 00:17:15.683 "trsvcid": "37578" 00:17:15.683 }, 00:17:15.683 "auth": { 00:17:15.683 "state": "completed", 00:17:15.683 "digest": "sha384", 00:17:15.683 "dhgroup": "null" 00:17:15.683 } 00:17:15.683 } 00:17:15.683 ]' 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:15.683 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.942 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.942 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.942 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.942 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:16.508 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.508 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:16.508 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.508 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.508 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.508 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.508 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.508 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.767 22:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.026 00:17:17.026 22:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.026 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.026 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.026 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.026 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.026 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.026 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.284 { 00:17:17.284 "cntlid": 53, 00:17:17.284 "qid": 0, 00:17:17.284 "state": "enabled", 00:17:17.284 "thread": "nvmf_tgt_poll_group_000", 00:17:17.284 "listen_address": { 00:17:17.284 "trtype": "TCP", 00:17:17.284 "adrfam": "IPv4", 00:17:17.284 "traddr": "10.0.0.2", 00:17:17.284 "trsvcid": "4420" 00:17:17.284 }, 00:17:17.284 "peer_address": { 00:17:17.284 "trtype": "TCP", 00:17:17.284 "adrfam": "IPv4", 00:17:17.284 "traddr": "10.0.0.1", 00:17:17.284 "trsvcid": "56008" 00:17:17.284 }, 00:17:17.284 "auth": { 00:17:17.284 "state": "completed", 00:17:17.284 "digest": "sha384", 00:17:17.284 "dhgroup": "null" 00:17:17.284 } 00:17:17.284 } 00:17:17.284 ]' 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.284 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.543 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.110 
22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.110 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.111 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.111 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.369 00:17:18.370 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.370 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.370 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.629 22:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.629 { 00:17:18.629 "cntlid": 55, 00:17:18.629 "qid": 0, 00:17:18.629 "state": "enabled", 00:17:18.629 "thread": "nvmf_tgt_poll_group_000", 00:17:18.629 "listen_address": { 00:17:18.629 "trtype": "TCP", 00:17:18.629 "adrfam": "IPv4", 00:17:18.629 "traddr": "10.0.0.2", 00:17:18.629 "trsvcid": "4420" 00:17:18.629 }, 00:17:18.629 "peer_address": { 00:17:18.629 "trtype": "TCP", 00:17:18.629 "adrfam": "IPv4", 00:17:18.629 "traddr": "10.0.0.1", 00:17:18.629 "trsvcid": "56038" 00:17:18.629 }, 00:17:18.629 "auth": { 00:17:18.629 "state": "completed", 00:17:18.629 "digest": "sha384", 00:17:18.629 "dhgroup": "null" 00:17:18.629 } 00:17:18.629 } 00:17:18.629 ]' 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:18.629 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.888 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.888 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.888 22:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.888 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:19.457 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.457 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:19.457 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.457 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.457 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.457 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.457 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.457 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.457 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.716 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.976 00:17:19.976 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.976 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.976 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.976 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.976 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.976 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.976 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.235 { 00:17:20.235 "cntlid": 57, 00:17:20.235 "qid": 0, 00:17:20.235 "state": "enabled", 00:17:20.235 "thread": "nvmf_tgt_poll_group_000", 00:17:20.235 "listen_address": { 00:17:20.235 "trtype": "TCP", 00:17:20.235 "adrfam": "IPv4", 00:17:20.235 "traddr": "10.0.0.2", 00:17:20.235 "trsvcid": "4420" 00:17:20.235 }, 00:17:20.235 "peer_address": { 00:17:20.235 "trtype": "TCP", 00:17:20.235 "adrfam": "IPv4", 00:17:20.235 "traddr": "10.0.0.1", 00:17:20.235 "trsvcid": "56062" 00:17:20.235 }, 00:17:20.235 "auth": { 00:17:20.235 "state": "completed", 00:17:20.235 "digest": "sha384", 00:17:20.235 "dhgroup": "ffdhe2048" 00:17:20.235 } 00:17:20.235 } 00:17:20.235 ]' 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.235 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.494 22:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.062 22:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.062 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.321 00:17:21.321 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.321 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.321 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.581 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.581 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.581 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.581 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.581 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.581 22:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.581 { 00:17:21.581 "cntlid": 59, 00:17:21.581 "qid": 0, 00:17:21.581 "state": "enabled", 00:17:21.581 "thread": "nvmf_tgt_poll_group_000", 00:17:21.581 "listen_address": { 00:17:21.581 "trtype": "TCP", 00:17:21.581 "adrfam": "IPv4", 00:17:21.581 "traddr": "10.0.0.2", 00:17:21.581 "trsvcid": "4420" 00:17:21.581 }, 00:17:21.581 "peer_address": { 00:17:21.581 "trtype": "TCP", 00:17:21.581 "adrfam": "IPv4", 00:17:21.581 "traddr": "10.0.0.1", 00:17:21.582 "trsvcid": "56084" 00:17:21.582 }, 00:17:21.582 "auth": { 00:17:21.582 "state": "completed", 00:17:21.582 "digest": "sha384", 00:17:21.582 "dhgroup": "ffdhe2048" 00:17:21.582 } 00:17:21.582 } 00:17:21.582 ]' 00:17:21.582 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.582 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.582 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.582 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.582 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.840 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.840 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.840 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.840 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:22.409 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.409 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:22.409 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.409 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.409 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.409 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.409 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.409 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.669 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.929 00:17:22.929 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.929 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.929 22:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.929 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.929 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.929 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.929 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.188 { 00:17:23.188 "cntlid": 61, 00:17:23.188 "qid": 0, 00:17:23.188 "state": "enabled", 00:17:23.188 "thread": "nvmf_tgt_poll_group_000", 00:17:23.188 "listen_address": { 00:17:23.188 "trtype": "TCP", 00:17:23.188 "adrfam": "IPv4", 00:17:23.188 "traddr": 
"10.0.0.2", 00:17:23.188 "trsvcid": "4420" 00:17:23.188 }, 00:17:23.188 "peer_address": { 00:17:23.188 "trtype": "TCP", 00:17:23.188 "adrfam": "IPv4", 00:17:23.188 "traddr": "10.0.0.1", 00:17:23.188 "trsvcid": "56118" 00:17:23.188 }, 00:17:23.188 "auth": { 00:17:23.188 "state": "completed", 00:17:23.188 "digest": "sha384", 00:17:23.188 "dhgroup": "ffdhe2048" 00:17:23.188 } 00:17:23.188 } 00:17:23.188 ]' 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.188 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.448 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:17:24.016 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.016 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:24.016 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.016 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.016 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.016 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.017 22:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.017 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.324 00:17:24.324 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.324 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.324 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.589 { 00:17:24.589 "cntlid": 63, 00:17:24.589 "qid": 0, 00:17:24.589 "state": "enabled", 00:17:24.589 "thread": "nvmf_tgt_poll_group_000", 00:17:24.589 "listen_address": { 00:17:24.589 "trtype": "TCP", 00:17:24.589 "adrfam": "IPv4", 00:17:24.589 "traddr": "10.0.0.2", 00:17:24.589 "trsvcid": "4420" 00:17:24.589 }, 00:17:24.589 "peer_address": { 00:17:24.589 "trtype": "TCP", 00:17:24.589 "adrfam": "IPv4", 00:17:24.589 "traddr": "10.0.0.1", 00:17:24.589 "trsvcid": "56152" 00:17:24.589 }, 00:17:24.589 "auth": { 00:17:24.589 "state": "completed", 00:17:24.589 "digest": "sha384", 00:17:24.589 "dhgroup": "ffdhe2048" 00:17:24.589 } 00:17:24.589 } 00:17:24.589 ]' 00:17:24.589 22:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.589 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.848 22:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:25.417 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.417 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:25.417 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.417 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.417 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.417 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.417 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.417 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.417 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.676 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.935 00:17:25.935 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.935 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.935 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.935 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.935 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.935 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.935 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.935 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.935 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.935 { 00:17:25.935 "cntlid": 65, 00:17:25.935 "qid": 0, 00:17:25.935 "state": "enabled", 00:17:25.935 "thread": "nvmf_tgt_poll_group_000", 00:17:25.935 "listen_address": { 00:17:25.935 "trtype": "TCP", 00:17:25.935 "adrfam": "IPv4", 00:17:25.935 "traddr": "10.0.0.2", 00:17:25.935 "trsvcid": "4420" 00:17:25.935 }, 00:17:25.935 "peer_address": { 00:17:25.935 "trtype": "TCP", 00:17:25.935 "adrfam": "IPv4", 00:17:25.935 "traddr": "10.0.0.1", 00:17:25.935 "trsvcid": "56180" 00:17:25.935 }, 00:17:25.935 "auth": { 00:17:25.935 "state": "completed", 00:17:25.935 "digest": "sha384", 00:17:25.935 "dhgroup": "ffdhe3072" 00:17:25.935 } 00:17:25.935 } 00:17:25.935 ]' 00:17:25.935 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.935 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.935 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.195 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:26.195 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.195 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.195 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.195 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.195 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:26.762 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.762 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:26.762 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.762 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.762 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.762 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.762 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.762 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.021 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.281 00:17:27.281 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.281 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.281 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.540 { 00:17:27.540 "cntlid": 67, 00:17:27.540 "qid": 0, 00:17:27.540 "state": "enabled", 00:17:27.540 "thread": "nvmf_tgt_poll_group_000", 00:17:27.540 "listen_address": { 00:17:27.540 "trtype": "TCP", 00:17:27.540 "adrfam": "IPv4", 00:17:27.540 "traddr": "10.0.0.2", 00:17:27.540 "trsvcid": "4420" 00:17:27.540 }, 00:17:27.540 "peer_address": { 00:17:27.540 "trtype": "TCP", 00:17:27.540 "adrfam": "IPv4", 00:17:27.540 "traddr": "10.0.0.1", 00:17:27.540 "trsvcid": "37752" 00:17:27.540 }, 00:17:27.540 "auth": { 00:17:27.540 "state": "completed", 00:17:27.540 "digest": "sha384", 00:17:27.540 "dhgroup": "ffdhe3072" 00:17:27.540 } 00:17:27.540 } 00:17:27.540 ]' 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.540 
22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.540 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.799 22:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.369 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.629 00:17:28.629 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.629 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.629 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.889 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.889 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.889 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.889 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.889 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.889 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.889 { 00:17:28.889 "cntlid": 69, 00:17:28.889 "qid": 0, 00:17:28.889 "state": "enabled", 00:17:28.889 "thread": "nvmf_tgt_poll_group_000", 00:17:28.889 "listen_address": { 00:17:28.889 "trtype": "TCP", 00:17:28.889 "adrfam": "IPv4", 00:17:28.889 "traddr": "10.0.0.2", 00:17:28.889 "trsvcid": "4420" 00:17:28.889 }, 00:17:28.889 "peer_address": { 00:17:28.889 "trtype": "TCP", 00:17:28.889 "adrfam": "IPv4", 00:17:28.889 "traddr": "10.0.0.1", 00:17:28.889 "trsvcid": "37768" 00:17:28.889 }, 00:17:28.889 "auth": { 00:17:28.889 "state": "completed", 00:17:28.889 "digest": "sha384", 00:17:28.889 "dhgroup": "ffdhe3072" 00:17:28.889 } 00:17:28.889 } 00:17:28.889 ]' 00:17:28.889 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.889 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.889 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.889 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.889 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.148 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.148 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.148 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.148 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:17:29.717 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.717 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:29.717 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.717 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.717 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.717 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.717 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.717 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.977 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.977 22:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.237 00:17:30.237 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.237 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.237 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.237 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.237 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.237 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.237 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.497 { 00:17:30.497 "cntlid": 71, 00:17:30.497 "qid": 0, 00:17:30.497 "state": "enabled", 00:17:30.497 "thread": "nvmf_tgt_poll_group_000", 00:17:30.497 "listen_address": { 00:17:30.497 "trtype": "TCP", 00:17:30.497 "adrfam": "IPv4", 00:17:30.497 "traddr": "10.0.0.2", 00:17:30.497 "trsvcid": "4420" 00:17:30.497 }, 00:17:30.497 "peer_address": { 00:17:30.497 "trtype": "TCP", 00:17:30.497 "adrfam": "IPv4", 00:17:30.497 "traddr": "10.0.0.1", 00:17:30.497 "trsvcid": "37796" 00:17:30.497 }, 00:17:30.497 "auth": { 00:17:30.497 "state": "completed", 00:17:30.497 "digest": "sha384", 00:17:30.497 "dhgroup": "ffdhe3072" 00:17:30.497 } 00:17:30.497 } 00:17:30.497 ]' 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.497 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.757 22:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.327 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.587 00:17:31.587 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.587 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.587 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.847 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.847 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.847 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.847 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.847 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.847 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.847 { 00:17:31.847 "cntlid": 73, 00:17:31.847 "qid": 0, 00:17:31.847 "state": "enabled", 00:17:31.847 "thread": "nvmf_tgt_poll_group_000", 00:17:31.847 "listen_address": { 00:17:31.847 "trtype": "TCP", 00:17:31.847 "adrfam": "IPv4", 00:17:31.847 "traddr": "10.0.0.2", 00:17:31.847 "trsvcid": "4420" 00:17:31.847 }, 00:17:31.847 "peer_address": { 00:17:31.847 "trtype": "TCP", 00:17:31.847 "adrfam": "IPv4", 00:17:31.847 "traddr": "10.0.0.1", 00:17:31.847 "trsvcid": "37838" 00:17:31.847 }, 00:17:31.847 "auth": { 00:17:31.847 "state": "completed", 00:17:31.847 "digest": "sha384", 00:17:31.847 "dhgroup": "ffdhe4096" 00:17:31.847 } 00:17:31.847 } 00:17:31.847 ]' 00:17:31.847 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.847 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.847 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.847 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.847 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.107 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.107 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.107 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.107 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:32.677 22:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.677 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:32.677 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.677 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.677 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.677 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.677 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.677 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.937 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.196 00:17:33.196 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.196 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.196 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.456 { 00:17:33.456 "cntlid": 75, 00:17:33.456 "qid": 0, 00:17:33.456 "state": "enabled", 00:17:33.456 "thread": "nvmf_tgt_poll_group_000", 00:17:33.456 "listen_address": { 00:17:33.456 "trtype": "TCP", 00:17:33.456 "adrfam": "IPv4", 00:17:33.456 "traddr": "10.0.0.2", 00:17:33.456 "trsvcid": "4420" 00:17:33.456 }, 00:17:33.456 "peer_address": { 00:17:33.456 "trtype": "TCP", 00:17:33.456 "adrfam": "IPv4", 00:17:33.456 "traddr": "10.0.0.1", 00:17:33.456 "trsvcid": "37862" 00:17:33.456 }, 00:17:33.456 "auth": { 00:17:33.456 "state": "completed", 00:17:33.456 "digest": "sha384", 00:17:33.456 "dhgroup": "ffdhe4096" 00:17:33.456 } 00:17:33.456 } 00:17:33.456 ]' 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.456 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.457 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.457 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.716 22:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:34.292 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.293 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.553 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.812 { 00:17:34.812 "cntlid": 77, 00:17:34.812 "qid": 0, 00:17:34.812 "state": "enabled", 00:17:34.812 "thread": "nvmf_tgt_poll_group_000", 00:17:34.812 "listen_address": { 00:17:34.812 "trtype": "TCP", 00:17:34.812 "adrfam": "IPv4", 00:17:34.812 "traddr": "10.0.0.2", 00:17:34.812 "trsvcid": "4420" 00:17:34.812 }, 00:17:34.812 "peer_address": { 00:17:34.812 "trtype": "TCP", 00:17:34.812 "adrfam": "IPv4", 00:17:34.812 "traddr": "10.0.0.1", 00:17:34.812 "trsvcid": "37882" 00:17:34.812 }, 00:17:34.812 "auth": { 00:17:34.812 "state": "completed", 00:17:34.812 "digest": "sha384", 00:17:34.812 "dhgroup": "ffdhe4096" 00:17:34.812 } 00:17:34.812 } 00:17:34.812 ]' 00:17:34.812 22:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.812 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.812 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.071 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.071 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.071 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.071 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.071 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.071 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:17:35.639 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.639 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:35.639 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.639 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.639 22:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.639 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.639 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.640 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.899 22:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.229 00:17:36.229 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.229 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.229 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.489 { 00:17:36.489 "cntlid": 79, 00:17:36.489 "qid": 0, 00:17:36.489 "state": "enabled", 00:17:36.489 "thread": "nvmf_tgt_poll_group_000", 00:17:36.489 "listen_address": { 00:17:36.489 "trtype": "TCP", 00:17:36.489 "adrfam": "IPv4", 00:17:36.489 "traddr": "10.0.0.2", 00:17:36.489 "trsvcid": "4420" 00:17:36.489 }, 00:17:36.489 "peer_address": { 00:17:36.489 "trtype": "TCP", 00:17:36.489 "adrfam": "IPv4", 00:17:36.489 "traddr": "10.0.0.1", 00:17:36.489 "trsvcid": "37914" 00:17:36.489 }, 00:17:36.489 "auth": { 00:17:36.489 "state": "completed", 00:17:36.489 "digest": "sha384", 00:17:36.489 "dhgroup": "ffdhe4096" 00:17:36.489 } 00:17:36.489 } 00:17:36.489 ]' 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.489 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.748 22:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.317 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.576 00:17:37.836 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.836 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.836 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.836 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.836 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.836 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.836 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.836 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.836 22:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.836 { 00:17:37.836 "cntlid": 81, 00:17:37.836 
"qid": 0, 00:17:37.836 "state": "enabled", 00:17:37.836 "thread": "nvmf_tgt_poll_group_000", 00:17:37.836 "listen_address": { 00:17:37.836 "trtype": "TCP", 00:17:37.836 "adrfam": "IPv4", 00:17:37.836 "traddr": "10.0.0.2", 00:17:37.836 "trsvcid": "4420" 00:17:37.836 }, 00:17:37.836 "peer_address": { 00:17:37.836 "trtype": "TCP", 00:17:37.836 "adrfam": "IPv4", 00:17:37.836 "traddr": "10.0.0.1", 00:17:37.836 "trsvcid": "55162" 00:17:37.836 }, 00:17:37.836 "auth": { 00:17:37.836 "state": "completed", 00:17:37.836 "digest": "sha384", 00:17:37.836 "dhgroup": "ffdhe6144" 00:17:37.836 } 00:17:37.836 } 00:17:37.836 ]' 00:17:37.836 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.836 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.836 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.095 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.095 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.095 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.095 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.096 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.355 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:38.614 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.874 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:38.874 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.874 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.874 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.874 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.874 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.874 22:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.874 22:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.874 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.443 00:17:39.443 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.443 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.443 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.444 { 00:17:39.444 "cntlid": 83, 00:17:39.444 "qid": 0, 00:17:39.444 "state": "enabled", 00:17:39.444 "thread": "nvmf_tgt_poll_group_000", 00:17:39.444 "listen_address": { 00:17:39.444 "trtype": "TCP", 00:17:39.444 "adrfam": "IPv4", 00:17:39.444 "traddr": "10.0.0.2", 00:17:39.444 "trsvcid": "4420" 00:17:39.444 }, 00:17:39.444 "peer_address": { 
00:17:39.444 "trtype": "TCP", 00:17:39.444 "adrfam": "IPv4", 00:17:39.444 "traddr": "10.0.0.1", 00:17:39.444 "trsvcid": "55198" 00:17:39.444 }, 00:17:39.444 "auth": { 00:17:39.444 "state": "completed", 00:17:39.444 "digest": "sha384", 00:17:39.444 "dhgroup": "ffdhe6144" 00:17:39.444 } 00:17:39.444 } 00:17:39.444 ]' 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.444 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.704 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.704 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.704 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.704 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:40.274 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.274 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:40.274 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.274 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.274 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.274 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.274 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.274 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.534 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.793 00:17:40.793 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.793 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.793 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.053 { 00:17:41.053 "cntlid": 85, 00:17:41.053 "qid": 0, 00:17:41.053 "state": "enabled", 00:17:41.053 "thread": "nvmf_tgt_poll_group_000", 00:17:41.053 "listen_address": { 00:17:41.053 "trtype": "TCP", 00:17:41.053 "adrfam": "IPv4", 00:17:41.053 "traddr": "10.0.0.2", 00:17:41.053 "trsvcid": "4420" 00:17:41.053 }, 00:17:41.053 "peer_address": { 00:17:41.053 "trtype": "TCP", 00:17:41.053 "adrfam": "IPv4", 00:17:41.053 "traddr": "10.0.0.1", 00:17:41.053 "trsvcid": "55214" 00:17:41.053 }, 00:17:41.053 "auth": { 00:17:41.053 "state": "completed", 00:17:41.053 "digest": "sha384", 00:17:41.053 "dhgroup": "ffdhe6144" 00:17:41.053 } 00:17:41.053 } 00:17:41.053 ]' 00:17:41.053 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.053 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.314 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:17:41.883 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.883 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:41.883 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.883 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.883 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.883 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.883 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:41.883 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.143 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.403 00:17:42.403 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.403 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.403 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.663 { 00:17:42.663 "cntlid": 87, 00:17:42.663 "qid": 0, 00:17:42.663 "state": "enabled", 00:17:42.663 "thread": "nvmf_tgt_poll_group_000", 00:17:42.663 "listen_address": { 00:17:42.663 "trtype": "TCP", 00:17:42.663 "adrfam": "IPv4", 00:17:42.663 "traddr": "10.0.0.2", 00:17:42.663 "trsvcid": "4420" 00:17:42.663 }, 00:17:42.663 "peer_address": { 00:17:42.663 "trtype": "TCP", 00:17:42.663 "adrfam": "IPv4", 00:17:42.663 "traddr": "10.0.0.1", 00:17:42.663 "trsvcid": "55240" 00:17:42.663 }, 00:17:42.663 "auth": { 00:17:42.663 "state": "completed", 00:17:42.663 "digest": "sha384", 00:17:42.663 "dhgroup": "ffdhe6144" 00:17:42.663 } 00:17:42.663 } 00:17:42.663 ]' 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.663 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.923 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.492 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.751 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.751 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.751 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.009 00:17:44.009 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.009 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.009 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.267 { 00:17:44.267 "cntlid": 89, 00:17:44.267 "qid": 0, 00:17:44.267 "state": "enabled", 00:17:44.267 "thread": "nvmf_tgt_poll_group_000", 00:17:44.267 "listen_address": { 00:17:44.267 "trtype": "TCP", 00:17:44.267 "adrfam": "IPv4", 00:17:44.267 "traddr": "10.0.0.2", 00:17:44.267 "trsvcid": "4420" 00:17:44.267 }, 00:17:44.267 "peer_address": { 00:17:44.267 "trtype": "TCP", 00:17:44.267 "adrfam": "IPv4", 00:17:44.267 "traddr": "10.0.0.1", 00:17:44.267 "trsvcid": "55258" 00:17:44.267 }, 00:17:44.267 "auth": { 00:17:44.267 "state": "completed", 00:17:44.267 "digest": "sha384", 00:17:44.267 "dhgroup": "ffdhe8192" 00:17:44.267 } 00:17:44.267 } 00:17:44.267 ]' 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.267 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.525 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:44.525 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.525 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.525 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:45.092 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.092 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:45.092 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.092 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.092 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.092 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.092 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.092 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.352 22:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.352 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.921 00:17:45.921 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.921 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.921 22:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.921 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.921 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.921 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.921 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.921 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.921 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.921 { 00:17:45.921 "cntlid": 91, 00:17:45.921 "qid": 0, 00:17:45.921 "state": "enabled", 00:17:45.921 "thread": "nvmf_tgt_poll_group_000", 00:17:45.921 "listen_address": { 00:17:45.921 "trtype": "TCP", 00:17:45.921 "adrfam": "IPv4", 00:17:45.921 "traddr": "10.0.0.2", 00:17:45.921 "trsvcid": "4420" 00:17:45.921 }, 00:17:45.921 "peer_address": { 00:17:45.921 "trtype": "TCP", 00:17:45.921 "adrfam": "IPv4", 00:17:45.921 "traddr": "10.0.0.1", 00:17:45.921 "trsvcid": "55302" 00:17:45.921 }, 00:17:45.921 "auth": { 00:17:45.921 "state": "completed", 00:17:45.921 "digest": "sha384", 00:17:45.921 "dhgroup": "ffdhe8192" 00:17:45.921 } 00:17:45.921 } 00:17:45.921 ]' 00:17:45.921 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.181 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.181 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.181 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.181 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.181 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.181 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.181 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.439 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:46.698 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.958 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:46.958 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.958 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.958 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.958 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.958 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.958 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:46.958 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.526 00:17:47.526 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.526 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.526 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.784 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.784 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.784 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.784 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.784 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.784 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.784 { 00:17:47.784 "cntlid": 93, 00:17:47.784 "qid": 0, 00:17:47.784 "state": "enabled", 00:17:47.784 "thread": "nvmf_tgt_poll_group_000", 00:17:47.784 "listen_address": { 00:17:47.784 "trtype": "TCP", 00:17:47.784 "adrfam": "IPv4", 00:17:47.784 "traddr": "10.0.0.2", 00:17:47.784 "trsvcid": "4420" 00:17:47.784 }, 00:17:47.784 "peer_address": { 00:17:47.784 "trtype": "TCP", 00:17:47.784 "adrfam": "IPv4", 00:17:47.784 "traddr": "10.0.0.1", 00:17:47.784 "trsvcid": "58420" 00:17:47.784 }, 00:17:47.784 "auth": { 00:17:47.784 "state": "completed", 00:17:47.784 "digest": "sha384", 00:17:47.784 "dhgroup": "ffdhe8192" 00:17:47.784 } 00:17:47.784 } 00:17:47.784 ]' 00:17:47.784 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.784 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.785 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.785 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.785 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.785 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.785 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.785 22:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.045 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.646 22:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.214 00:17:49.214 22:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.214 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.214 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.472 { 00:17:49.472 "cntlid": 95, 00:17:49.472 "qid": 0, 00:17:49.472 "state": "enabled", 00:17:49.472 "thread": "nvmf_tgt_poll_group_000", 00:17:49.472 "listen_address": { 00:17:49.472 "trtype": "TCP", 00:17:49.472 "adrfam": "IPv4", 00:17:49.472 "traddr": "10.0.0.2", 00:17:49.472 "trsvcid": "4420" 00:17:49.472 }, 00:17:49.472 "peer_address": { 00:17:49.472 "trtype": "TCP", 00:17:49.472 "adrfam": "IPv4", 00:17:49.472 "traddr": "10.0.0.1", 00:17:49.472 "trsvcid": "58438" 00:17:49.472 }, 00:17:49.472 "auth": { 00:17:49.472 "state": "completed", 00:17:49.472 "digest": "sha384", 00:17:49.472 "dhgroup": "ffdhe8192" 00:17:49.472 } 00:17:49.472 } 00:17:49.472 ]' 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.472 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.730 22:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.299 22:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.299 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.559 00:17:50.559 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.559 22:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.559 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.823 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.823 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.823 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.823 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.823 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.823 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.823 { 00:17:50.823 "cntlid": 97, 00:17:50.823 "qid": 0, 00:17:50.823 "state": "enabled", 00:17:50.823 "thread": "nvmf_tgt_poll_group_000", 00:17:50.823 "listen_address": { 00:17:50.823 "trtype": "TCP", 00:17:50.823 "adrfam": "IPv4", 00:17:50.823 "traddr": "10.0.0.2", 00:17:50.823 "trsvcid": "4420" 00:17:50.823 }, 00:17:50.823 "peer_address": { 00:17:50.823 "trtype": "TCP", 00:17:50.823 "adrfam": "IPv4", 00:17:50.823 "traddr": "10.0.0.1", 00:17:50.823 "trsvcid": "58464" 00:17:50.823 }, 00:17:50.823 "auth": { 00:17:50.823 "state": "completed", 00:17:50.823 "digest": "sha512", 00:17:50.823 "dhgroup": "null" 00:17:50.823 } 00:17:50.823 } 00:17:50.823 ]' 00:17:50.823 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.823 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.823 22:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.823 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:50.823 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.082 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.082 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.082 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.082 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:51.650 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.650 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:51.650 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.650 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.650 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.650 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.650 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.650 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.910 22:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.169 00:17:52.169 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.169 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.169 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.169 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.169 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.169 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.169 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.169 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.169 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.169 { 00:17:52.169 "cntlid": 99, 00:17:52.169 "qid": 0, 00:17:52.169 "state": "enabled", 00:17:52.169 "thread": "nvmf_tgt_poll_group_000", 00:17:52.169 "listen_address": { 00:17:52.169 "trtype": "TCP", 00:17:52.169 "adrfam": "IPv4", 00:17:52.169 "traddr": "10.0.0.2", 00:17:52.169 "trsvcid": "4420" 00:17:52.169 }, 00:17:52.169 "peer_address": { 00:17:52.169 "trtype": "TCP", 00:17:52.169 "adrfam": "IPv4", 00:17:52.169 "traddr": "10.0.0.1", 00:17:52.169 "trsvcid": "58490" 00:17:52.169 }, 00:17:52.169 "auth": { 00:17:52.170 "state": "completed", 00:17:52.170 "digest": "sha512", 00:17:52.170 "dhgroup": "null" 00:17:52.170 } 00:17:52.170 } 00:17:52.170 ]' 00:17:52.170 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.429 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.429 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.429 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:52.429 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.429 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.429 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.429 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.689 22:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.257 22:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.257 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:53.258 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.258 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.258 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.258 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.258 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.258 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.258 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.258 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.517 00:17:53.517 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.517 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.517 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.776 22:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.776 { 00:17:53.776 "cntlid": 101, 00:17:53.776 "qid": 0, 00:17:53.776 "state": "enabled", 00:17:53.776 "thread": "nvmf_tgt_poll_group_000", 00:17:53.776 "listen_address": { 00:17:53.776 "trtype": "TCP", 00:17:53.776 "adrfam": "IPv4", 00:17:53.776 "traddr": "10.0.0.2", 00:17:53.776 "trsvcid": "4420" 00:17:53.776 }, 00:17:53.776 "peer_address": { 00:17:53.776 "trtype": "TCP", 00:17:53.776 "adrfam": "IPv4", 00:17:53.776 "traddr": "10.0.0.1", 00:17:53.776 "trsvcid": "58526" 00:17:53.776 }, 00:17:53.776 "auth": { 00:17:53.776 "state": "completed", 00:17:53.776 "digest": "sha512", 00:17:53.776 "dhgroup": "null" 00:17:53.776 } 00:17:53.776 } 00:17:53.776 ]' 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:53.776 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.035 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.035 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.035 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.035 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:17:54.616 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.616 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:54.616 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.616 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.616 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.616 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.616 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:17:54.616 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.875 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.136 00:17:55.136 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.136 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.136 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.136 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.136 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.136 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.136 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.397 { 00:17:55.397 "cntlid": 103, 00:17:55.397 "qid": 0, 00:17:55.397 "state": "enabled", 00:17:55.397 "thread": "nvmf_tgt_poll_group_000", 
00:17:55.397 "listen_address": { 00:17:55.397 "trtype": "TCP", 00:17:55.397 "adrfam": "IPv4", 00:17:55.397 "traddr": "10.0.0.2", 00:17:55.397 "trsvcid": "4420" 00:17:55.397 }, 00:17:55.397 "peer_address": { 00:17:55.397 "trtype": "TCP", 00:17:55.397 "adrfam": "IPv4", 00:17:55.397 "traddr": "10.0.0.1", 00:17:55.397 "trsvcid": "58556" 00:17:55.397 }, 00:17:55.397 "auth": { 00:17:55.397 "state": "completed", 00:17:55.397 "digest": "sha512", 00:17:55.397 "dhgroup": "null" 00:17:55.397 } 00:17:55.397 } 00:17:55.397 ]' 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.397 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.657 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 0 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.226 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.486 00:17:56.486 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.486 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.486 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.745 { 00:17:56.745 "cntlid": 105, 00:17:56.745 "qid": 0, 00:17:56.745 "state": "enabled", 00:17:56.745 "thread": "nvmf_tgt_poll_group_000", 00:17:56.745 "listen_address": { 00:17:56.745 "trtype": "TCP", 00:17:56.745 "adrfam": "IPv4", 00:17:56.745 "traddr": "10.0.0.2", 00:17:56.745 "trsvcid": "4420" 00:17:56.745 }, 00:17:56.745 "peer_address": { 00:17:56.745 "trtype": "TCP", 00:17:56.745 "adrfam": "IPv4", 00:17:56.745 "traddr": "10.0.0.1", 
00:17:56.745 "trsvcid": "58576" 00:17:56.745 }, 00:17:56.745 "auth": { 00:17:56.745 "state": "completed", 00:17:56.745 "digest": "sha512", 00:17:56.745 "dhgroup": "ffdhe2048" 00:17:56.745 } 00:17:56.745 } 00:17:56.745 ]' 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.745 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.746 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.746 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.746 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.005 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:17:57.574 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.574 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:57.574 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.574 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.574 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.574 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.574 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.574 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.833 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:57.833 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.833 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.833 22:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:57.834 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.834 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.834 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.834 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.834 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.834 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.834 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.834 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.094 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.094 { 00:17:58.094 "cntlid": 107, 00:17:58.094 "qid": 0, 00:17:58.094 "state": "enabled", 00:17:58.094 "thread": "nvmf_tgt_poll_group_000", 00:17:58.094 "listen_address": { 00:17:58.094 "trtype": "TCP", 00:17:58.094 "adrfam": "IPv4", 00:17:58.094 "traddr": "10.0.0.2", 00:17:58.094 "trsvcid": "4420" 00:17:58.094 }, 00:17:58.094 "peer_address": { 00:17:58.094 "trtype": "TCP", 00:17:58.094 "adrfam": "IPv4", 00:17:58.094 "traddr": "10.0.0.1", 00:17:58.094 "trsvcid": "48566" 00:17:58.094 }, 00:17:58.094 "auth": { 00:17:58.094 "state": "completed", 00:17:58.094 "digest": "sha512", 00:17:58.094 "dhgroup": "ffdhe2048" 00:17:58.094 } 00:17:58.094 } 00:17:58.094 ]' 00:17:58.094 22:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.094 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.353 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.353 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.353 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.353 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.353 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.612 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:17:59.180 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.181 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.440 00:17:59.440 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.440 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.440 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.699 { 00:17:59.699 "cntlid": 109, 00:17:59.699 "qid": 0, 00:17:59.699 "state": "enabled", 00:17:59.699 "thread": "nvmf_tgt_poll_group_000", 00:17:59.699 "listen_address": { 00:17:59.699 "trtype": "TCP", 00:17:59.699 "adrfam": "IPv4", 00:17:59.699 "traddr": "10.0.0.2", 00:17:59.699 "trsvcid": "4420" 00:17:59.699 }, 00:17:59.699 "peer_address": { 00:17:59.699 "trtype": "TCP", 00:17:59.699 "adrfam": "IPv4", 00:17:59.699 "traddr": "10.0.0.1", 00:17:59.699 "trsvcid": "48588" 00:17:59.699 }, 00:17:59.699 "auth": { 00:17:59.699 "state": "completed", 00:17:59.699 "digest": "sha512", 00:17:59.699 "dhgroup": "ffdhe2048" 00:17:59.699 } 00:17:59.699 } 00:17:59.699 ]' 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.699 
22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.699 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.959 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:18:00.530 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.530 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:00.530 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.530 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.530 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.530 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.530 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.530 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.790 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.049 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.049 { 00:18:01.049 "cntlid": 111, 00:18:01.049 "qid": 0, 00:18:01.049 "state": "enabled", 00:18:01.049 "thread": "nvmf_tgt_poll_group_000", 00:18:01.049 "listen_address": { 00:18:01.049 "trtype": "TCP", 00:18:01.049 "adrfam": "IPv4", 00:18:01.049 "traddr": "10.0.0.2", 00:18:01.049 "trsvcid": "4420" 00:18:01.049 }, 00:18:01.049 "peer_address": { 00:18:01.049 "trtype": "TCP", 00:18:01.049 "adrfam": "IPv4", 00:18:01.049 "traddr": "10.0.0.1", 00:18:01.049 "trsvcid": "48612" 00:18:01.049 }, 00:18:01.049 "auth": { 00:18:01.049 "state": "completed", 00:18:01.049 "digest": "sha512", 00:18:01.049 "dhgroup": "ffdhe2048" 00:18:01.049 } 00:18:01.049 } 00:18:01.049 ]' 00:18:01.049 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.308 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.308 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.308 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.308 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.308 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.308 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.308 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.568 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.136 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.395 00:18:02.395 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.395 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.395 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.654 { 00:18:02.654 "cntlid": 113, 00:18:02.654 "qid": 0, 00:18:02.654 "state": "enabled", 00:18:02.654 "thread": "nvmf_tgt_poll_group_000", 00:18:02.654 "listen_address": { 00:18:02.654 "trtype": "TCP", 00:18:02.654 "adrfam": "IPv4", 00:18:02.654 "traddr": "10.0.0.2", 00:18:02.654 "trsvcid": "4420" 00:18:02.654 }, 00:18:02.654 "peer_address": { 00:18:02.654 "trtype": "TCP", 00:18:02.654 "adrfam": "IPv4", 00:18:02.654 "traddr": "10.0.0.1", 00:18:02.654 "trsvcid": "48644" 00:18:02.654 }, 00:18:02.654 "auth": { 00:18:02.654 "state": "completed", 00:18:02.654 "digest": "sha512", 00:18:02.654 "dhgroup": "ffdhe3072" 00:18:02.654 } 00:18:02.654 } 00:18:02.654 ]' 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.654 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.913 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.913 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.913 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.913 
22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:18:03.548 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.548 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:03.548 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.548 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.548 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.548 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.548 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.548 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.809 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.069 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.069 { 00:18:04.069 "cntlid": 115, 00:18:04.069 "qid": 0, 00:18:04.069 "state": "enabled", 00:18:04.069 "thread": "nvmf_tgt_poll_group_000", 00:18:04.069 "listen_address": { 00:18:04.069 "trtype": "TCP", 00:18:04.069 "adrfam": "IPv4", 00:18:04.069 "traddr": "10.0.0.2", 00:18:04.069 "trsvcid": "4420" 00:18:04.069 }, 00:18:04.069 "peer_address": { 00:18:04.069 "trtype": "TCP", 00:18:04.069 "adrfam": "IPv4", 00:18:04.069 "traddr": "10.0.0.1", 00:18:04.069 "trsvcid": "48670" 00:18:04.069 }, 00:18:04.069 "auth": { 00:18:04.069 "state": "completed", 00:18:04.069 "digest": "sha512", 00:18:04.069 "dhgroup": "ffdhe3072" 00:18:04.069 } 00:18:04.069 } 00:18:04.069 ]' 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.069 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.328 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.328 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.328 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.328 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.328 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.328 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:18:04.898 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.898 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:04.898 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.898 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.898 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.898 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.898 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.898 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.158 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.418 
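The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion traced at target/auth.sh@37 in every connect_authenticate call is what lets a slot run without a controller key: when the ckeys entry is empty, bash's ${var:+word} form expands to nothing and the optional flag pair is simply dropped, which is why the key3 iterations carry only --dhchap-key key3 on add_host and attach_controller. A minimal stand-alone illustration of that expansion (the array contents below are hypothetical; in the real auth.sh the DHHC-1 secrets are generated earlier in the script):

    #!/usr/bin/env bash
    # hypothetical controller-key table; slot 3 deliberately left empty, as in auth.sh
    ckeys=("ck0" "ck1" "ck2" "")

    for keyid in "${!ckeys[@]}"; do
        # ${ckeys[$keyid]:+...} expands to the flag pair only when the slot is non-empty
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid -> ${ckey[*]:-<no controller key flag>}"
    done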
00:18:05.418 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.418 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.418 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.678 { 00:18:05.678 "cntlid": 117, 00:18:05.678 "qid": 0, 00:18:05.678 "state": "enabled", 00:18:05.678 "thread": "nvmf_tgt_poll_group_000", 00:18:05.678 "listen_address": { 00:18:05.678 "trtype": "TCP", 00:18:05.678 "adrfam": "IPv4", 00:18:05.678 "traddr": "10.0.0.2", 00:18:05.678 "trsvcid": "4420" 00:18:05.678 }, 00:18:05.678 "peer_address": { 00:18:05.678 "trtype": "TCP", 00:18:05.678 "adrfam": "IPv4", 00:18:05.678 "traddr": "10.0.0.1", 00:18:05.678 "trsvcid": "48704" 00:18:05.678 }, 00:18:05.678 "auth": { 00:18:05.678 "state": "completed", 00:18:05.678 "digest": "sha512", 00:18:05.678 "dhgroup": "ffdhe3072" 00:18:05.678 } 00:18:05.678 } 00:18:05.678 ]' 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.678 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.938 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.509 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.509 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.769 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.769 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.769 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.769 00:18:07.028 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.028 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.028 22:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:07.028 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.028 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.028 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.028 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.028 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.029 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.029 { 00:18:07.029 "cntlid": 119, 00:18:07.029 "qid": 0, 00:18:07.029 "state": "enabled", 00:18:07.029 "thread": "nvmf_tgt_poll_group_000", 00:18:07.029 "listen_address": { 00:18:07.029 "trtype": "TCP", 00:18:07.029 "adrfam": "IPv4", 00:18:07.029 "traddr": "10.0.0.2", 00:18:07.029 "trsvcid": "4420" 00:18:07.029 }, 00:18:07.029 "peer_address": { 00:18:07.029 "trtype": "TCP", 00:18:07.029 "adrfam": "IPv4", 00:18:07.029 "traddr": "10.0.0.1", 00:18:07.029 "trsvcid": "55354" 00:18:07.029 }, 00:18:07.029 "auth": { 00:18:07.029 "state": "completed", 00:18:07.029 "digest": "sha512", 00:18:07.029 "dhgroup": "ffdhe3072" 00:18:07.029 } 00:18:07.029 } 00:18:07.029 ]' 00:18:07.029 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.029 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.029 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.288 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.288 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.288 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.288 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.288 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.288 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:18:07.858 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.858 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:07.858 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.858 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:07.858 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.858 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.858 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.858 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.858 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.118 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.378 00:18:08.378 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.378 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.378 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
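Each pass of the dhgroup/key loop in this trace repeats the same three-step setup before anything is verified. Below is a condensed sketch of that sequence for the sha512/ffdhe4096/key0 iteration above, assuming rpc.py stands for the full scripts/rpc.py path in this workspace, and that rpc_cmd (never expanded in the trace) talks to the target application's default RPC socket while hostrpc talks to /var/tmp/host.sock:

    # 1. restrict the host to the digest/dhgroup pair under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # 2. register the host on the target subsystem with the key pair for this slot
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. attach a host-side controller with the same keys; the resulting qpair must authenticate
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0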
00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.637 { 00:18:08.637 "cntlid": 121, 00:18:08.637 "qid": 0, 00:18:08.637 "state": "enabled", 00:18:08.637 "thread": "nvmf_tgt_poll_group_000", 00:18:08.637 "listen_address": { 00:18:08.637 "trtype": "TCP", 00:18:08.637 "adrfam": "IPv4", 00:18:08.637 "traddr": "10.0.0.2", 00:18:08.637 "trsvcid": "4420" 00:18:08.637 }, 00:18:08.637 "peer_address": { 00:18:08.637 "trtype": "TCP", 00:18:08.637 "adrfam": "IPv4", 00:18:08.637 "traddr": "10.0.0.1", 00:18:08.637 "trsvcid": "55368" 00:18:08.637 }, 00:18:08.637 "auth": { 00:18:08.637 "state": "completed", 00:18:08.637 "digest": "sha512", 00:18:08.637 "dhgroup": "ffdhe4096" 00:18:08.637 } 00:18:08.637 } 00:18:08.637 ]' 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.637 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.896 22:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:18:09.464 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.464 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:09.464 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.464 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.464 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.464 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.464 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.464 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.724 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.983 00:18:09.983 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.983 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.983 22:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.983 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.983 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.983 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.983 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.983 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.983 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.983 { 00:18:09.983 "cntlid": 123, 00:18:09.983 "qid": 0, 00:18:09.983 "state": "enabled", 00:18:09.983 "thread": "nvmf_tgt_poll_group_000", 00:18:09.983 "listen_address": { 00:18:09.983 "trtype": "TCP", 00:18:09.983 "adrfam": "IPv4", 00:18:09.983 "traddr": "10.0.0.2", 00:18:09.983 "trsvcid": "4420" 00:18:09.983 }, 00:18:09.983 "peer_address": { 00:18:09.983 "trtype": "TCP", 00:18:09.983 "adrfam": "IPv4", 00:18:09.983 "traddr": "10.0.0.1", 00:18:09.983 "trsvcid": "55396" 00:18:09.983 }, 00:18:09.983 "auth": { 00:18:09.983 "state": "completed", 00:18:09.983 "digest": "sha512", 00:18:09.983 "dhgroup": "ffdhe4096" 00:18:09.983 } 00:18:09.983 } 00:18:09.983 ]' 00:18:09.983 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.983 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.983 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.244 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.244 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.244 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.244 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.244 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.244 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:18:10.814 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.814 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:10.814 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.814 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.815 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.815 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.815 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.815 22:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.074 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.334 00:18:11.334 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.334 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.334 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.594 { 00:18:11.594 "cntlid": 125, 00:18:11.594 "qid": 0, 00:18:11.594 "state": "enabled", 00:18:11.594 "thread": "nvmf_tgt_poll_group_000", 00:18:11.594 "listen_address": 
{ 00:18:11.594 "trtype": "TCP", 00:18:11.594 "adrfam": "IPv4", 00:18:11.594 "traddr": "10.0.0.2", 00:18:11.594 "trsvcid": "4420" 00:18:11.594 }, 00:18:11.594 "peer_address": { 00:18:11.594 "trtype": "TCP", 00:18:11.594 "adrfam": "IPv4", 00:18:11.594 "traddr": "10.0.0.1", 00:18:11.594 "trsvcid": "55442" 00:18:11.594 }, 00:18:11.594 "auth": { 00:18:11.594 "state": "completed", 00:18:11.594 "digest": "sha512", 00:18:11.594 "dhgroup": "ffdhe4096" 00:18:11.594 } 00:18:11.594 } 00:18:11.594 ]' 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.594 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.854 22:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:18:12.424 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.424 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:12.424 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.424 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.424 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.424 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.424 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.424 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.685 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.945 00:18:12.945 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.945 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.945 22:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.945 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.945 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.945 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.945 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.945 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.945 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.945 { 00:18:12.945 "cntlid": 127, 00:18:12.945 "qid": 0, 00:18:12.945 "state": "enabled", 00:18:12.945 "thread": "nvmf_tgt_poll_group_000", 00:18:12.945 "listen_address": { 00:18:12.945 "trtype": "TCP", 00:18:12.945 "adrfam": "IPv4", 00:18:12.945 "traddr": "10.0.0.2", 00:18:12.945 "trsvcid": "4420" 00:18:12.945 }, 00:18:12.945 "peer_address": { 00:18:12.945 "trtype": "TCP", 00:18:12.945 "adrfam": "IPv4", 00:18:12.945 "traddr": "10.0.0.1", 00:18:12.945 "trsvcid": "55460" 00:18:12.945 }, 00:18:12.945 "auth": { 00:18:12.945 "state": "completed", 00:18:12.945 "digest": "sha512", 00:18:12.945 "dhgroup": 
"ffdhe4096" 00:18:12.945 } 00:18:12.945 } 00:18:12.945 ]' 00:18:12.945 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.205 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.205 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.205 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.205 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.205 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.205 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.205 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.465 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:18:14.035 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.035 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:14.035 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.035 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.035 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.035 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.035 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.036 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.036 22:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.036 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.296 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.555 { 00:18:14.555 "cntlid": 129, 00:18:14.555 "qid": 0, 00:18:14.555 "state": "enabled", 00:18:14.555 "thread": "nvmf_tgt_poll_group_000", 00:18:14.555 "listen_address": { 00:18:14.555 "trtype": "TCP", 00:18:14.555 "adrfam": "IPv4", 00:18:14.555 "traddr": "10.0.0.2", 00:18:14.555 "trsvcid": "4420" 00:18:14.555 }, 00:18:14.555 "peer_address": { 00:18:14.555 "trtype": "TCP", 00:18:14.555 "adrfam": "IPv4", 00:18:14.555 "traddr": "10.0.0.1", 00:18:14.555 "trsvcid": "55494" 00:18:14.555 }, 00:18:14.555 "auth": { 00:18:14.555 "state": "completed", 00:18:14.555 "digest": "sha512", 00:18:14.555 "dhgroup": "ffdhe6144" 00:18:14.555 } 00:18:14.555 } 00:18:14.555 ]' 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
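After the attach, the checks in the trace are read-only queries pattern-matched against the expected values for the iteration, followed by a tear-down and a plain nvme-cli connect with the raw secrets. A condensed sketch for the sha512/ffdhe6144/key0 pass above; $hostkey and $ctrlkey are placeholders for the literal DHHC-1 secrets shown in the trace, and rpc.py again stands for the full scripts/rpc.py path:

    # the RPC-created controller must be listed by name
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # the accepted qpair on the target must report the negotiated auth parameters
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha512
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe6144

    # tear down the RPC controller, then authenticate once more via nvme-cli with the raw secrets
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        --hostid 006f0d1b-21c0-e711-906e-00163566263e \
        --dhchap-secret "$hostkey" --dhchap-ctrl-secret "$ctrlkey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # finally remove the host entry so the next iteration starts clean
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e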
00:18:14.555 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.814 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.815 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.815 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.815 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.815 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.815 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:18:15.384 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.384 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:15.384 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.384 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.384 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.384 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.384 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.384 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.643 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:15.643 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.643 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.643 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:15.643 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.643 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.644 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.644 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.644 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.644 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.644 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.644 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.903 00:18:15.903 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.903 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.903 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.163 { 00:18:16.163 "cntlid": 131, 00:18:16.163 "qid": 0, 00:18:16.163 "state": "enabled", 00:18:16.163 "thread": "nvmf_tgt_poll_group_000", 00:18:16.163 "listen_address": { 00:18:16.163 "trtype": "TCP", 00:18:16.163 "adrfam": "IPv4", 00:18:16.163 "traddr": "10.0.0.2", 00:18:16.163 "trsvcid": "4420" 00:18:16.163 }, 00:18:16.163 "peer_address": { 00:18:16.163 "trtype": "TCP", 00:18:16.163 "adrfam": "IPv4", 00:18:16.163 "traddr": "10.0.0.1", 00:18:16.163 "trsvcid": "55520" 00:18:16.163 }, 00:18:16.163 "auth": { 00:18:16.163 "state": "completed", 00:18:16.163 "digest": "sha512", 00:18:16.163 "dhgroup": "ffdhe6144" 00:18:16.163 } 00:18:16.163 } 00:18:16.163 ]' 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.163 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.163 22:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.423 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.423 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.423 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.423 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:18:16.992 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.992 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:16.992 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.993 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.993 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.993 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.993 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.993 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.253 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.512 00:18:17.512 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.512 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.512 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.771 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.771 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.772 { 00:18:17.772 "cntlid": 133, 00:18:17.772 "qid": 0, 00:18:17.772 "state": "enabled", 00:18:17.772 "thread": "nvmf_tgt_poll_group_000", 00:18:17.772 "listen_address": { 00:18:17.772 "trtype": "TCP", 00:18:17.772 "adrfam": "IPv4", 00:18:17.772 "traddr": "10.0.0.2", 00:18:17.772 "trsvcid": "4420" 00:18:17.772 }, 00:18:17.772 "peer_address": { 00:18:17.772 "trtype": "TCP", 00:18:17.772 "adrfam": "IPv4", 00:18:17.772 "traddr": "10.0.0.1", 00:18:17.772 "trsvcid": "52718" 00:18:17.772 }, 00:18:17.772 "auth": { 00:18:17.772 "state": "completed", 00:18:17.772 "digest": "sha512", 00:18:17.772 "dhgroup": "ffdhe6144" 00:18:17.772 } 00:18:17.772 } 00:18:17.772 ]' 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:17.772 22:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.030 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:18:18.598 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.598 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:18.598 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.598 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.598 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.598 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.598 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.598 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.858 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.859 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.859 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.118 00:18:19.118 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.118 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.118 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.377 { 00:18:19.377 "cntlid": 135, 00:18:19.377 "qid": 0, 00:18:19.377 "state": "enabled", 00:18:19.377 "thread": "nvmf_tgt_poll_group_000", 00:18:19.377 "listen_address": { 00:18:19.377 "trtype": "TCP", 00:18:19.377 "adrfam": "IPv4", 00:18:19.377 "traddr": "10.0.0.2", 00:18:19.377 "trsvcid": "4420" 00:18:19.377 }, 00:18:19.377 "peer_address": { 00:18:19.377 "trtype": "TCP", 00:18:19.377 "adrfam": "IPv4", 00:18:19.377 "traddr": "10.0.0.1", 00:18:19.377 "trsvcid": "52752" 00:18:19.377 }, 00:18:19.377 "auth": { 00:18:19.377 "state": "completed", 00:18:19.377 "digest": "sha512", 00:18:19.377 "dhgroup": "ffdhe6144" 00:18:19.377 } 00:18:19.377 } 00:18:19.377 ]' 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.377 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.637 22:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:20.206 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.207 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.207 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.207 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.466 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.466 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.466 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.725 00:18:20.725 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.725 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.725 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.984 { 00:18:20.984 "cntlid": 137, 00:18:20.984 "qid": 0, 00:18:20.984 "state": "enabled", 00:18:20.984 "thread": "nvmf_tgt_poll_group_000", 00:18:20.984 "listen_address": { 00:18:20.984 "trtype": "TCP", 00:18:20.984 "adrfam": "IPv4", 00:18:20.984 "traddr": "10.0.0.2", 00:18:20.984 "trsvcid": "4420" 00:18:20.984 }, 00:18:20.984 "peer_address": { 00:18:20.984 "trtype": "TCP", 00:18:20.984 "adrfam": "IPv4", 00:18:20.984 "traddr": "10.0.0.1", 00:18:20.984 "trsvcid": "52782" 00:18:20.984 }, 00:18:20.984 "auth": { 00:18:20.984 "state": "completed", 00:18:20.984 "digest": "sha512", 00:18:20.984 "dhgroup": "ffdhe8192" 00:18:20.984 } 00:18:20.984 } 00:18:20.984 ]' 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.244 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret 
DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:18:21.813 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.813 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:21.813 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.813 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.813 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.813 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.813 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.071 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.329 00:18:22.587 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.587 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.587 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.587 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.587 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.587 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.587 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.587 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.587 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.587 { 00:18:22.587 "cntlid": 139, 00:18:22.588 "qid": 0, 00:18:22.588 "state": "enabled", 00:18:22.588 "thread": "nvmf_tgt_poll_group_000", 00:18:22.588 "listen_address": { 00:18:22.588 "trtype": "TCP", 00:18:22.588 "adrfam": "IPv4", 00:18:22.588 "traddr": "10.0.0.2", 00:18:22.588 "trsvcid": "4420" 00:18:22.588 }, 00:18:22.588 "peer_address": { 00:18:22.588 "trtype": "TCP", 00:18:22.588 "adrfam": "IPv4", 00:18:22.588 "traddr": "10.0.0.1", 00:18:22.588 "trsvcid": "52810" 00:18:22.588 }, 00:18:22.588 "auth": { 00:18:22.588 "state": "completed", 00:18:22.588 "digest": "sha512", 00:18:22.588 "dhgroup": "ffdhe8192" 00:18:22.588 } 00:18:22.588 } 00:18:22.588 ]' 00:18:22.588 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.588 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.588 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.846 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.846 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.846 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.846 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.846 22:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.846 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjE4MmQ4OThlNDFjZTNmNzY4OTEwMzE4YTk5YmMyNmaABYDs: --dhchap-ctrl-secret DHHC-1:02:YWVkYjhjZDA3Y2RiZTkyZDllYmIwMThlOGVkYTAwMzMxN2FjYWU4ZGM0OTNlZGQzFo0H9Q==: 00:18:23.414 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.414 22:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:23.414 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.414 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.414 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.414 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.414 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.414 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.674 22:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.308 00:18:24.308 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.308 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.308 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.308 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.309 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.309 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.309 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.309 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.309 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.309 { 00:18:24.309 "cntlid": 141, 00:18:24.309 "qid": 0, 00:18:24.309 "state": "enabled", 00:18:24.309 "thread": "nvmf_tgt_poll_group_000", 00:18:24.309 "listen_address": { 00:18:24.309 "trtype": "TCP", 00:18:24.309 "adrfam": "IPv4", 00:18:24.309 "traddr": "10.0.0.2", 00:18:24.309 "trsvcid": "4420" 00:18:24.309 }, 00:18:24.309 "peer_address": { 00:18:24.309 "trtype": "TCP", 00:18:24.309 "adrfam": "IPv4", 00:18:24.309 "traddr": "10.0.0.1", 00:18:24.309 "trsvcid": "52854" 00:18:24.309 }, 00:18:24.309 "auth": { 00:18:24.309 "state": "completed", 00:18:24.309 "digest": "sha512", 00:18:24.309 "dhgroup": "ffdhe8192" 00:18:24.309 } 00:18:24.309 } 00:18:24.309 ]' 00:18:24.309 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.309 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.309 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.567 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.567 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.567 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.567 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.567 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.567 22:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmFmYzFhNDE4NjJmZDEzYTEwYjlmYjAyMDBiZWZhMDk5MzA5YmQ0NDE5NzI5N2M2vB0mHA==: --dhchap-ctrl-secret DHHC-1:01:YzdiYjU2M2QyOGNjOTQ2ZjEzOWQzNDYxMDlkNmUyZmEIwIts: 00:18:25.137 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.137 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:25.137 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.137 22:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.137 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.137 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.137 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.137 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.396 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.965 00:18:25.965 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.965 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.965 22:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.965 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.965 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.965 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:25.965 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.965 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.965 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.965 { 00:18:25.965 "cntlid": 143, 00:18:25.965 "qid": 0, 00:18:25.965 "state": "enabled", 00:18:25.965 "thread": "nvmf_tgt_poll_group_000", 00:18:25.965 "listen_address": { 00:18:25.965 "trtype": "TCP", 00:18:25.965 "adrfam": "IPv4", 00:18:25.965 "traddr": "10.0.0.2", 00:18:25.965 "trsvcid": "4420" 00:18:25.965 }, 00:18:25.965 "peer_address": { 00:18:25.965 "trtype": "TCP", 00:18:25.965 "adrfam": "IPv4", 00:18:25.965 "traddr": "10.0.0.1", 00:18:25.965 "trsvcid": "52890" 00:18:25.965 }, 00:18:25.965 "auth": { 00:18:25.965 "state": "completed", 00:18:25.965 "digest": "sha512", 00:18:25.965 "dhgroup": "ffdhe8192" 00:18:25.965 } 00:18:25.965 } 00:18:25.965 ]' 00:18:25.965 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.224 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.224 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.224 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.224 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.224 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.224 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.224 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.483 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:18:27.052 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.052 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:27.052 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.052 22:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:27.052 22:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.052 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.619 00:18:27.619 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.619 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.619 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.619 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.880 { 00:18:27.880 "cntlid": 145, 00:18:27.880 "qid": 0, 00:18:27.880 "state": "enabled", 00:18:27.880 "thread": "nvmf_tgt_poll_group_000", 00:18:27.880 "listen_address": { 00:18:27.880 "trtype": "TCP", 00:18:27.880 "adrfam": "IPv4", 00:18:27.880 "traddr": "10.0.0.2", 00:18:27.880 "trsvcid": "4420" 00:18:27.880 }, 00:18:27.880 "peer_address": { 00:18:27.880 "trtype": "TCP", 00:18:27.880 "adrfam": "IPv4", 00:18:27.880 "traddr": "10.0.0.1", 00:18:27.880 "trsvcid": "48844" 00:18:27.880 }, 00:18:27.880 "auth": { 00:18:27.880 "state": "completed", 00:18:27.880 "digest": "sha512", 00:18:27.880 "dhgroup": "ffdhe8192" 00:18:27.880 } 00:18:27.880 } 00:18:27.880 ]' 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.880 22:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.138 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGE1NDdhNTQ5YTY5MmY2NDk1ZWIxYzY5MGMwMmU4M2YzMjQ4NWFlNjQzMzhmMzZi+WYYww==: --dhchap-ctrl-secret DHHC-1:03:ZjBjY2U2ZjgzZDA4NTE5MWJjY2JkYjhiOTE2ZWZjZmFhOGViOTI3YWUyZGMxZTliNTUxNjE0ZDE5YTFkZGNhNOKYb+M=: 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.705 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:28.706 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.706 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:28.706 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:28.964 request: 00:18:28.964 { 00:18:28.964 "name": "nvme0", 00:18:28.964 "trtype": "tcp", 00:18:28.964 "traddr": "10.0.0.2", 00:18:28.964 "adrfam": "ipv4", 00:18:28.964 "trsvcid": "4420", 00:18:28.964 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:28.964 "prchk_reftag": false, 00:18:28.964 "prchk_guard": false, 00:18:28.964 "hdgst": false, 00:18:28.964 "ddgst": false, 00:18:28.964 "dhchap_key": "key2", 00:18:28.965 "method": "bdev_nvme_attach_controller", 00:18:28.965 "req_id": 1 00:18:28.965 } 00:18:28.965 Got JSON-RPC error response 00:18:28.965 response: 00:18:28.965 { 00:18:28.965 "code": -5, 00:18:28.965 "message": "Input/output error" 00:18:28.965 } 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.965 
22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.965 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:29.532 request: 00:18:29.532 { 00:18:29.532 "name": "nvme0", 00:18:29.532 "trtype": "tcp", 00:18:29.532 "traddr": "10.0.0.2", 00:18:29.532 "adrfam": "ipv4", 00:18:29.532 "trsvcid": "4420", 00:18:29.532 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:18:29.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:29.532 "prchk_reftag": false, 00:18:29.532 "prchk_guard": false, 00:18:29.532 "hdgst": false, 00:18:29.532 "ddgst": false, 00:18:29.532 "dhchap_key": "key1", 00:18:29.532 "dhchap_ctrlr_key": "ckey2", 00:18:29.532 "method": "bdev_nvme_attach_controller", 00:18:29.532 "req_id": 1 00:18:29.532 } 00:18:29.532 Got JSON-RPC error response 00:18:29.532 response: 00:18:29.532 { 00:18:29.532 "code": -5, 00:18:29.532 "message": "Input/output error" 00:18:29.532 } 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.532 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.790 request: 00:18:29.790 { 00:18:29.790 "name": "nvme0", 00:18:29.790 "trtype": "tcp", 00:18:29.790 "traddr": "10.0.0.2", 00:18:29.790 "adrfam": "ipv4", 00:18:29.790 "trsvcid": "4420", 00:18:29.790 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:29.790 "prchk_reftag": false, 00:18:29.790 "prchk_guard": false, 00:18:29.790 "hdgst": false, 00:18:29.790 "ddgst": false, 00:18:29.790 "dhchap_key": "key1", 00:18:29.790 "dhchap_ctrlr_key": "ckey1", 00:18:29.790 "method": "bdev_nvme_attach_controller", 00:18:29.790 "req_id": 1 00:18:29.790 } 00:18:29.790 Got JSON-RPC error response 00:18:29.791 response: 00:18:29.791 { 00:18:29.791 "code": -5, 00:18:29.791 "message": "Input/output error" 00:18:29.791 } 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2681996 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2681996 ']' 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2681996 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2681996 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2681996' 00:18:30.049 killing process with pid 2681996 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2681996 00:18:30.049 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2681996 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2703210 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2703210 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2703210 ']' 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
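The two failed bdev_nvme_attach_controller calls above (target/auth.sh@125 and @132) deliberately offer a controller key that does not match what the subsystem's host entry was configured with, so the attach fails and the RPC returns code -5 (Input/output error). A minimal standalone sketch of the first of those negative checks, assuming the same RPC sockets, key names and NQNs that appear in this log:

    # Target side: allow the host with key1 and controller key ckey1 only.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: offer a mismatched controller key (ckey2); the attach is expected to fail
    # with "Input/output error" (-5), exactly as captured in the request/response above.
    if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "attach unexpectedly succeeded" >&2; exit 1
    fi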
00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.309 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2703210 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2703210 ']' 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
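Immediately above, the first nvmf target (pid 2681996) is killed and target/auth.sh@139 restarts it with --wait-for-rpc and the nvmf_auth log flag, then waitforlisten blocks until the new process (pid 2703210) answers on /var/tmp/spdk.sock. A rough equivalent of that restart sequence, with the polling loop and the framework_start_init call written out as assumptions (the log does not show waitforlisten's internals):

    # Launch the target inside the test namespace, deferring subsystem init until RPC time.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Poll the RPC socket until the app is listening (assumed polling strategy).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # With --wait-for-rpc, initialization completes only after framework_start_init is called.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init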
00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.246 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.506 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.765 00:18:31.765 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.765 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.765 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.025 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.025 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.025 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.025 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.025 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.025 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.025 { 00:18:32.025 "cntlid": 1, 00:18:32.025 "qid": 0, 00:18:32.025 "state": "enabled", 00:18:32.025 "thread": "nvmf_tgt_poll_group_000", 00:18:32.025 "listen_address": { 00:18:32.025 "trtype": "TCP", 00:18:32.025 "adrfam": "IPv4", 00:18:32.025 "traddr": "10.0.0.2", 00:18:32.025 "trsvcid": "4420" 00:18:32.025 }, 00:18:32.025 "peer_address": { 00:18:32.025 "trtype": "TCP", 00:18:32.025 "adrfam": "IPv4", 00:18:32.025 "traddr": "10.0.0.1", 00:18:32.025 "trsvcid": "48888" 00:18:32.025 }, 00:18:32.025 "auth": { 00:18:32.025 "state": "completed", 00:18:32.025 "digest": "sha512", 00:18:32.025 "dhgroup": "ffdhe8192" 00:18:32.025 } 00:18:32.025 } 00:18:32.025 ]' 00:18:32.025 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.025 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.025 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.284 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.284 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.284 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.284 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.284 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.284 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzljOGEyM2ZmODkxMWFhNmExYjk5ZGM1YjM1YTEyNzJiZjMxOTMwZTc0YzUxNTA0NTZlNDJmMzc5ZWE5NDQ2N4vraJk=: 00:18:32.851 22:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:32.851 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:33.110 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.110 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:33.110 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.110 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:33.110 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.110 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:33.110 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.110 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.110 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.370 request: 00:18:33.370 { 00:18:33.370 "name": "nvme0", 00:18:33.370 "trtype": "tcp", 00:18:33.370 "traddr": "10.0.0.2", 00:18:33.370 "adrfam": "ipv4", 00:18:33.370 "trsvcid": "4420", 00:18:33.370 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:33.370 "prchk_reftag": false, 00:18:33.370 "prchk_guard": false, 00:18:33.370 "hdgst": false, 00:18:33.370 "ddgst": false, 00:18:33.370 "dhchap_key": "key3", 00:18:33.370 "method": "bdev_nvme_attach_controller", 00:18:33.370 "req_id": 1 00:18:33.370 } 00:18:33.370 Got JSON-RPC error response 00:18:33.370 response: 00:18:33.370 { 00:18:33.370 "code": -5, 00:18:33.370 "message": "Input/output error" 00:18:33.370 } 00:18:33.370 22:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.370 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.630 request: 00:18:33.630 { 00:18:33.630 "name": "nvme0", 00:18:33.630 "trtype": "tcp", 00:18:33.630 "traddr": "10.0.0.2", 00:18:33.630 "adrfam": "ipv4", 00:18:33.630 "trsvcid": "4420", 00:18:33.630 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:33.630 "prchk_reftag": false, 00:18:33.630 "prchk_guard": false, 00:18:33.630 "hdgst": false, 00:18:33.630 "ddgst": false, 00:18:33.630 "dhchap_key": "key3", 00:18:33.630 
"method": "bdev_nvme_attach_controller", 00:18:33.630 "req_id": 1 00:18:33.630 } 00:18:33.630 Got JSON-RPC error response 00:18:33.630 response: 00:18:33.630 { 00:18:33.630 "code": -5, 00:18:33.630 "message": "Input/output error" 00:18:33.630 } 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:33.630 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:33.889 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:33.889 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.889 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.889 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:33.890 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.149 request: 00:18:34.149 { 00:18:34.149 "name": "nvme0", 00:18:34.149 "trtype": "tcp", 00:18:34.149 "traddr": "10.0.0.2", 00:18:34.149 "adrfam": "ipv4", 00:18:34.149 "trsvcid": "4420", 00:18:34.149 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:34.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:34.149 "prchk_reftag": false, 00:18:34.149 "prchk_guard": false, 00:18:34.149 "hdgst": false, 00:18:34.149 "ddgst": false, 00:18:34.149 "dhchap_key": "key0", 00:18:34.149 "dhchap_ctrlr_key": "key1", 00:18:34.149 "method": "bdev_nvme_attach_controller", 00:18:34.149 "req_id": 1 00:18:34.149 } 00:18:34.149 Got JSON-RPC error response 00:18:34.149 response: 00:18:34.149 { 00:18:34.149 "code": -5, 00:18:34.149 "message": "Input/output error" 00:18:34.149 } 00:18:34.149 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:34.149 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.149 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.149 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.150 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:34.150 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:34.150 00:18:34.150 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:34.150 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.150 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:34.409 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.409 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.409 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2682247 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2682247 ']' 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2682247 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2682247 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2682247' 00:18:34.669 killing process with pid 2682247 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2682247 00:18:34.669 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2682247 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.929 rmmod nvme_tcp 00:18:34.929 rmmod nvme_fabrics 00:18:34.929 rmmod nvme_keyring 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@489 -- # '[' -n 2703210 ']' 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2703210 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2703210 ']' 00:18:34.929 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2703210 00:18:35.189 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:35.189 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2703210 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2703210' 00:18:35.190 killing process with pid 2703210 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2703210 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2703210 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.190 22:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.732 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.732 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.9ah /tmp/spdk.key-sha256.Xhd /tmp/spdk.key-sha384.GPm /tmp/spdk.key-sha512.6F1 /tmp/spdk.key-sha512.QFG /tmp/spdk.key-sha384.ale /tmp/spdk.key-sha256.j5r '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:37.732 00:18:37.732 real 2m10.339s 00:18:37.732 user 4m49.348s 00:18:37.732 sys 0m29.374s 00:18:37.732 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.732 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.732 ************************************ 00:18:37.732 END TEST nvmf_auth_target 00:18:37.732 ************************************ 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:37.733 22:06:16 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.733 ************************************ 00:18:37.733 START TEST nvmf_bdevio_no_huge 00:18:37.733 ************************************ 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:37.733 * Looking for test storage... 00:18:37.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.733 22:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.733 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.316 22:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:44.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.316 22:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:44.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:44.316 Found net devices under 0000:af:00.0: cvl_0_0 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
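The loop above walks the whitelisted e810 PCI functions (0x8086:0x159b) and resolves each one to its kernel net device through sysfs before choosing the target and initiator interfaces. A stripped-down sketch of that same lookup, with the two PCI addresses from this host hard-coded for illustration:

    # Hypothetical standalone version of the sysfs lookup done by gather_supported_nvmf_pci_devs.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
        done
    done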
00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:44.316 Found net devices under 0000:af:00.1: cvl_0_1 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:44.316 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:18:44.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:18:44.316 00:18:44.316 --- 10.0.0.2 ping statistics --- 00:18:44.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.316 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:18:44.316 00:18:44.316 --- 10.0.0.1 ping statistics --- 00:18:44.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.316 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.316 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.317 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.317 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.317 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.317 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:44.317 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:44.317 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.317 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.317 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.577 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2707754 00:18:44.577 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:44.577 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2707754 00:18:44.577 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2707754 ']' 00:18:44.577 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.577 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.577 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
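The trace above is nvmf_tcp_init from nvmf/common.sh building a loopback NVMe/TCP topology on a single host: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target endpoint at 10.0.0.2, while the sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule then admits TCP port 4420 and both directions are verified with ping. A minimal sketch of the equivalent commands, assuming the same cvl_0_0/cvl_0_1 interface names seen in this run:

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic to the initiator port
    ping -c 1 10.0.0.2                                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator

With the namespace in place, NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk", so the nvmf_tgt launched next runs inside the namespace with hugepages disabled and a 1024 MB memory limit (--no-huge -s 1024 -m 0x78), which is what gives this bdevio suite its "no_huge" character.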
00:18:44.577 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.577 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.577 [2024-07-24 22:06:23.584399] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:18:44.577 [2024-07-24 22:06:23.584455] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:44.577 [2024-07-24 22:06:23.665325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.577 [2024-07-24 22:06:23.763128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.577 [2024-07-24 22:06:23.763166] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.577 [2024-07-24 22:06:23.763175] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.577 [2024-07-24 22:06:23.763183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.577 [2024-07-24 22:06:23.763190] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.577 [2024-07-24 22:06:23.763311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:44.577 [2024-07-24 22:06:23.763423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:44.577 [2024-07-24 22:06:23.763510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.577 [2024-07-24 22:06:23.763511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.515 [2024-07-24 22:06:24.430988] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.515 22:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.515 Malloc0 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.515 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.516 [2024-07-24 22:06:24.467553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:45.516 { 00:18:45.516 "params": { 00:18:45.516 "name": "Nvme$subsystem", 00:18:45.516 "trtype": "$TEST_TRANSPORT", 00:18:45.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.516 "adrfam": "ipv4", 00:18:45.516 "trsvcid": "$NVMF_PORT", 00:18:45.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.516 "hdgst": ${hdgst:-false}, 00:18:45.516 "ddgst": ${ddgst:-false} 00:18:45.516 }, 00:18:45.516 "method": "bdev_nvme_attach_controller" 00:18:45.516 } 00:18:45.516 EOF 00:18:45.516 )") 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
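The heredoc traced above is gen_nvmf_target_json assembling the initiator configuration for bdevio: a single bdev_nvme_attach_controller call pointing at the subsystem just created (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) with header/data digests disabled. The "--json /dev/fd/62" on the bdevio command line is how bash renders the process substitution that feeds this generated JSON to the app, so no config file ever touches disk; a roughly equivalent hand-run invocation, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available, would be:

    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

The rendered JSON, printed next in the log, is the concrete form of the template once the subsystem index and PSK/digest defaults are substituted in.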
00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:45.516 22:06:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:45.516 "params": { 00:18:45.516 "name": "Nvme1", 00:18:45.516 "trtype": "tcp", 00:18:45.516 "traddr": "10.0.0.2", 00:18:45.516 "adrfam": "ipv4", 00:18:45.516 "trsvcid": "4420", 00:18:45.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.516 "hdgst": false, 00:18:45.516 "ddgst": false 00:18:45.516 }, 00:18:45.516 "method": "bdev_nvme_attach_controller" 00:18:45.516 }' 00:18:45.516 [2024-07-24 22:06:24.520960] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:18:45.516 [2024-07-24 22:06:24.521012] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2707881 ] 00:18:45.516 [2024-07-24 22:06:24.595670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:45.516 [2024-07-24 22:06:24.695889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.516 [2024-07-24 22:06:24.695983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.516 [2024-07-24 22:06:24.695985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.775 I/O targets: 00:18:45.775 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:45.775 00:18:45.775 00:18:45.775 CUnit - A unit testing framework for C - Version 2.1-3 00:18:45.775 http://cunit.sourceforge.net/ 00:18:45.775 00:18:45.775 00:18:45.775 Suite: bdevio tests on: Nvme1n1 00:18:45.775 Test: blockdev write read block ...passed 00:18:46.034 Test: blockdev write zeroes read block ...passed 00:18:46.034 Test: blockdev write zeroes read no split ...passed 00:18:46.034 Test: blockdev write zeroes read split ...passed 00:18:46.034 Test: blockdev write zeroes read split partial ...passed 00:18:46.034 Test: blockdev reset ...[2024-07-24 22:06:25.155032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.034 [2024-07-24 22:06:25.155096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ab670 (9): Bad file descriptor 00:18:46.294 [2024-07-24 22:06:25.253837] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:46.294 passed 00:18:46.294 Test: blockdev write read 8 blocks ...passed 00:18:46.294 Test: blockdev write read size > 128k ...passed 00:18:46.294 Test: blockdev write read invalid size ...passed 00:18:46.294 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:46.294 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:46.294 Test: blockdev write read max offset ...passed 00:18:46.294 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:46.294 Test: blockdev writev readv 8 blocks ...passed 00:18:46.294 Test: blockdev writev readv 30 x 1block ...passed 00:18:46.294 Test: blockdev writev readv block ...passed 00:18:46.294 Test: blockdev writev readv size > 128k ...passed 00:18:46.294 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:46.294 Test: blockdev comparev and writev ...[2024-07-24 22:06:25.429455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.294 [2024-07-24 22:06:25.429486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.294 [2024-07-24 22:06:25.429502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.294 [2024-07-24 22:06:25.429512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.294 [2024-07-24 22:06:25.429842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.294 [2024-07-24 22:06:25.429855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.294 [2024-07-24 22:06:25.429869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.294 [2024-07-24 22:06:25.429879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.294 [2024-07-24 22:06:25.430190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.294 [2024-07-24 22:06:25.430202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.294 [2024-07-24 22:06:25.430216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.294 [2024-07-24 22:06:25.430226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.294 [2024-07-24 22:06:25.430546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.294 [2024-07-24 22:06:25.430559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.294 [2024-07-24 22:06:25.430572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.294 [2024-07-24 22:06:25.430583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.294 passed 00:18:46.553 Test: blockdev nvme passthru rw ...passed 00:18:46.553 Test: blockdev nvme passthru vendor specific ...[2024-07-24 22:06:25.513144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.553 [2024-07-24 22:06:25.513161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.553 [2024-07-24 22:06:25.513350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.553 [2024-07-24 22:06:25.513362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.553 [2024-07-24 22:06:25.513551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.553 [2024-07-24 22:06:25.513563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.553 [2024-07-24 22:06:25.513761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.553 [2024-07-24 22:06:25.513775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.553 passed 00:18:46.553 Test: blockdev nvme admin passthru ...passed 00:18:46.553 Test: blockdev copy ...passed 00:18:46.553 00:18:46.553 Run Summary: Type Total Ran Passed Failed Inactive 00:18:46.553 suites 1 1 n/a 0 0 00:18:46.553 tests 23 23 23 0 0 00:18:46.553 asserts 152 152 152 0 n/a 00:18:46.553 00:18:46.553 Elapsed time = 1.267 seconds 00:18:46.812 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.812 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.812 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.812 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.813 rmmod nvme_tcp 00:18:46.813 rmmod nvme_fabrics 00:18:46.813 rmmod nvme_keyring 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2707754 ']' 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2707754 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2707754 ']' 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2707754 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.813 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2707754 00:18:47.072 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:47.072 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:47.072 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2707754' 00:18:47.072 killing process with pid 2707754 00:18:47.072 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2707754 00:18:47.072 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2707754 00:18:47.331 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:47.331 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.331 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.331 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.331 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.331 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.331 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.331 22:06:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:49.869 00:18:49.869 real 0m11.936s 00:18:49.869 user 0m14.437s 00:18:49.869 sys 0m6.376s 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.869 ************************************ 00:18:49.869 END TEST nvmf_bdevio_no_huge 00:18:49.869 ************************************ 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:49.869 ************************************ 00:18:49.869 START TEST nvmf_tls 00:18:49.869 ************************************ 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:49.869 * Looking for test storage... 00:18:49.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.869 22:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:56.442 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:56.442 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:56.443 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:56.443 Found net devices under 0000:af:00.0: cvl_0_0 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:56.443 Found net devices under 0000:af:00.1: cvl_0_1 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.443 22:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:56.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:18:56.443 00:18:56.443 --- 10.0.0.2 ping statistics --- 00:18:56.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.443 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:56.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:18:56.443 00:18:56.443 --- 10.0.0.1 ping statistics --- 00:18:56.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.443 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2711817 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2711817 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2711817 ']' 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.443 22:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.443 [2024-07-24 22:06:35.581170] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:18:56.443 [2024-07-24 22:06:35.581222] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.443 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.444 [2024-07-24 22:06:35.652902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.704 [2024-07-24 22:06:35.729593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.704 [2024-07-24 22:06:35.729629] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.704 [2024-07-24 22:06:35.729639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.704 [2024-07-24 22:06:35.729647] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.704 [2024-07-24 22:06:35.729655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.704 [2024-07-24 22:06:35.729681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.282 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.282 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:57.282 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.282 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:57.282 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.282 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.282 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:57.282 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:57.540 true 00:18:57.540 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.540 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:57.799 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:57.799 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:57.799 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:57.799 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.799 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:58.058 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:58.058 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:58.058 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:18:58.058 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.058 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:58.317 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:58.317 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:58.317 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.317 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:58.576 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:58.576 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:58.576 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:58.576 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.576 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:58.835 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:58.835 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:58.835 22:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:59.094 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.094 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:59.094 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:59.094 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:59.094 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:59.094 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:59.094 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:59.095 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.iGB6Arf8r8 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.K5LIWACxIb 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.iGB6Arf8r8 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.K5LIWACxIb 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:59.354 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:59.613 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.iGB6Arf8r8 00:18:59.613 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iGB6Arf8r8 00:18:59.613 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:59.872 [2024-07-24 22:06:38.888829] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.872 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:59.872 22:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:00.132 [2024-07-24 22:06:39.201624] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.132 [2024-07-24 22:06:39.201824] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.132 22:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:00.412 malloc0 00:19:00.412 22:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:00.412 22:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iGB6Arf8r8 00:19:00.675 [2024-07-24 22:06:39.675281] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:00.675 22:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.iGB6Arf8r8 00:19:00.675 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.660 Initializing NVMe Controllers 00:19:10.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:10.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:10.660 Initialization complete. Launching workers. 00:19:10.660 ======================================================== 00:19:10.660 Latency(us) 00:19:10.660 Device Information : IOPS MiB/s Average min max 00:19:10.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16342.10 63.84 3916.72 819.41 5847.15 00:19:10.660 ======================================================== 00:19:10.660 Total : 16342.10 63.84 3916.72 819.41 5847.15 00:19:10.660 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iGB6Arf8r8 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iGB6Arf8r8' 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2714247 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2714247 /var/tmp/bdevperf.sock 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2714247 ']' 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.660 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.660 [2024-07-24 22:06:49.837299] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:19:10.660 [2024-07-24 22:06:49.837351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714247 ] 00:19:10.660 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.919 [2024-07-24 22:06:49.903247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.919 [2024-07-24 22:06:49.978279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.488 22:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.488 22:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:11.488 22:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iGB6Arf8r8 00:19:11.747 [2024-07-24 22:06:50.790532] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.747 [2024-07-24 22:06:50.790610] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:11.747 TLSTESTn1 00:19:11.747 22:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:12.006 Running I/O for 10 seconds... 
00:19:21.985 00:19:21.985 Latency(us) 00:19:21.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.985 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:21.985 Verification LBA range: start 0x0 length 0x2000 00:19:21.985 TLSTESTn1 : 10.03 5063.25 19.78 0.00 0.00 25230.30 6710.89 60817.41 00:19:21.985 =================================================================================================================== 00:19:21.985 Total : 5063.25 19.78 0.00 0.00 25230.30 6710.89 60817.41 00:19:21.985 0 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2714247 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2714247 ']' 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2714247 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2714247 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2714247' 00:19:21.986 killing process with pid 2714247 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2714247 00:19:21.986 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.986 00:19:21.986 Latency(us) 00:19:21.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.986 =================================================================================================================== 00:19:21.986 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.986 [2024-07-24 22:07:01.100680] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:21.986 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2714247 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K5LIWACxIb 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K5LIWACxIb 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
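For reference, the NVMeTLSkey-1 strings produced earlier by format_interchange_psk / format_key (target/tls.sh@118-119, nvmf/common.sh@702-705) follow the NVMe/TCP TLS PSK interchange layout visible in the trace: the "NVMeTLSkey-1:" prefix, a two-hex-digit hash field ("01" here), the base64 of the configured key with a CRC-32 appended, and a trailing ':'. The helper body is not shown in the trace, so the sketch below is a reconstruction under two assumptions: the key is treated as a literal ASCII string, and the CRC-32 is appended little-endian; the _sketch name is hypothetical.

# Rough reconstruction of the format_interchange_psk helper traced above; it
# mirrors the suite's approach of shelling out to an inline Python snippet.
format_interchange_psk_sketch() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()        # configured key, e.g. 00112233445566778899aabbccddeeff
digest = int(sys.argv[2])         # hash field, 1 in the runs above
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
}

# If the assumptions hold, the call
#   format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1
# reproduces the NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
# value written to /tmp/tmp.iGB6Arf8r8 and registered for host1 above.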
00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K5LIWACxIb 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.K5LIWACxIb' 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2716118 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2716118 /var/tmp/bdevperf.sock 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2716118 ']' 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.245 22:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.245 [2024-07-24 22:07:01.333129] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:19:22.245 [2024-07-24 22:07:01.333182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716118 ] 00:19:22.245 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.245 [2024-07-24 22:07:01.400632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.504 [2024-07-24 22:07:01.470348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.072 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.072 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:23.072 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.K5LIWACxIb 00:19:23.072 [2024-07-24 22:07:02.284874] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.072 [2024-07-24 22:07:02.284964] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:23.331 [2024-07-24 22:07:02.295847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:23.331 [2024-07-24 22:07:02.296331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64f5e0 (107): Transport endpoint is not connected 00:19:23.331 [2024-07-24 22:07:02.297321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64f5e0 (9): Bad file descriptor 00:19:23.331 [2024-07-24 22:07:02.298323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:23.331 [2024-07-24 22:07:02.298335] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:23.331 [2024-07-24 22:07:02.298346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
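This expected-failure case exercises the same initiator path as the passing TLSTESTn1 run: bdevperf is started in RPC-driven mode (-z) on its own socket, a TLS controller is attached with bdev_nvme_attach_controller --psk, and only then would bdevperf.py perform_tests drive I/O. Condensed from the trace (SPDK_DIR stands for the workspace path in the log; the suite's waitforlisten and cleanup handling is omitted), the one difference is the key handed to --psk, /tmp/tmp.K5LIWACxIb, which was never registered for host1 on the target, so the handshake cannot complete and the attach returns the JSON-RPC -5 response that follows:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BDEVPERF_SOCK=/var/tmp/bdevperf.sock

# Start the bdevperf application on its own RPC socket, as in the trace.
"$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r "$BDEVPERF_SOCK" -q 128 -o 4096 -w verify -t 10 &

# Attach a TLS-secured NVMe-oF controller; this is the step that fails here.
"$SPDK_DIR/scripts/rpc.py" -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.K5LIWACxIb

# Only reached when the attach succeeds (as in the TLSTESTn1 run earlier):
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$BDEVPERF_SOCK" perform_tests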
00:19:23.331 request: 00:19:23.331 { 00:19:23.331 "name": "TLSTEST", 00:19:23.331 "trtype": "tcp", 00:19:23.331 "traddr": "10.0.0.2", 00:19:23.331 "adrfam": "ipv4", 00:19:23.331 "trsvcid": "4420", 00:19:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.331 "prchk_reftag": false, 00:19:23.331 "prchk_guard": false, 00:19:23.331 "hdgst": false, 00:19:23.331 "ddgst": false, 00:19:23.331 "psk": "/tmp/tmp.K5LIWACxIb", 00:19:23.331 "method": "bdev_nvme_attach_controller", 00:19:23.331 "req_id": 1 00:19:23.331 } 00:19:23.331 Got JSON-RPC error response 00:19:23.331 response: 00:19:23.331 { 00:19:23.331 "code": -5, 00:19:23.331 "message": "Input/output error" 00:19:23.331 } 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2716118 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2716118 ']' 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2716118 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2716118 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2716118' 00:19:23.331 killing process with pid 2716118 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2716118 00:19:23.331 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.331 00:19:23.331 Latency(us) 00:19:23.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.331 =================================================================================================================== 00:19:23.331 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.331 [2024-07-24 22:07:02.375337] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2716118 00:19:23.331 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iGB6Arf8r8 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iGB6Arf8r8 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iGB6Arf8r8 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iGB6Arf8r8' 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2716366 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2716366 /var/tmp/bdevperf.sock 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2716366 ']' 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.591 22:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.591 [2024-07-24 22:07:02.599126] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:19:23.591 [2024-07-24 22:07:02.599179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716366 ] 00:19:23.591 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.591 [2024-07-24 22:07:02.666629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.591 [2024-07-24 22:07:02.741769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.iGB6Arf8r8 00:19:24.529 [2024-07-24 22:07:03.544828] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.529 [2024-07-24 22:07:03.544919] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:24.529 [2024-07-24 22:07:03.549514] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:24.529 [2024-07-24 22:07:03.549538] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:24.529 [2024-07-24 22:07:03.549564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:24.529 [2024-07-24 22:07:03.550225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b15e0 (107): Transport endpoint is not connected 00:19:24.529 [2024-07-24 22:07:03.551217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b15e0 (9): Bad file descriptor 00:19:24.529 [2024-07-24 22:07:03.552218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:24.529 [2024-07-24 22:07:03.552229] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:24.529 [2024-07-24 22:07:03.552243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
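The tcp.c and posix.c errors above show why swapping only the host NQN breaks the connection: the target resolves the PSK by a TLS PSK identity built from the host and subsystem NQNs, and the only key it knows was registered for host1 against cnode1, so an offer from host2 has nothing to match. The identity string appears verbatim in the error messages; only the NQNs differ between the negative cases (the "NVMe0R01" prefix is taken as-is from the trace, its derivation is not shown there):

# Identity the target reports it cannot resolve in the failure above.
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1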
00:19:24.529 request: 00:19:24.529 { 00:19:24.529 "name": "TLSTEST", 00:19:24.529 "trtype": "tcp", 00:19:24.529 "traddr": "10.0.0.2", 00:19:24.529 "adrfam": "ipv4", 00:19:24.529 "trsvcid": "4420", 00:19:24.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.529 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:24.529 "prchk_reftag": false, 00:19:24.529 "prchk_guard": false, 00:19:24.529 "hdgst": false, 00:19:24.529 "ddgst": false, 00:19:24.529 "psk": "/tmp/tmp.iGB6Arf8r8", 00:19:24.529 "method": "bdev_nvme_attach_controller", 00:19:24.529 "req_id": 1 00:19:24.529 } 00:19:24.529 Got JSON-RPC error response 00:19:24.529 response: 00:19:24.529 { 00:19:24.529 "code": -5, 00:19:24.529 "message": "Input/output error" 00:19:24.529 } 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2716366 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2716366 ']' 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2716366 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2716366 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:24.529 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:24.530 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2716366' 00:19:24.530 killing process with pid 2716366 00:19:24.530 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2716366 00:19:24.530 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.530 00:19:24.530 Latency(us) 00:19:24.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.530 =================================================================================================================== 00:19:24.530 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.530 [2024-07-24 22:07:03.621558] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:24.530 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2716366 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iGB6Arf8r8 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iGB6Arf8r8 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iGB6Arf8r8 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iGB6Arf8r8' 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2716637 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2716637 /var/tmp/bdevperf.sock 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2716637 ']' 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.790 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.790 [2024-07-24 22:07:03.842939] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:19:24.790 [2024-07-24 22:07:03.842996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716637 ] 00:19:24.790 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.790 [2024-07-24 22:07:03.908787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.790 [2024-07-24 22:07:03.983706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iGB6Arf8r8 00:19:25.726 [2024-07-24 22:07:04.801389] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.726 [2024-07-24 22:07:04.801480] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:25.726 [2024-07-24 22:07:04.812109] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.726 [2024-07-24 22:07:04.812131] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.726 [2024-07-24 22:07:04.812173] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.726 [2024-07-24 22:07:04.812832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd75e0 (107): Transport endpoint is not connected 00:19:25.726 [2024-07-24 22:07:04.813823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd75e0 (9): Bad file descriptor 00:19:25.726 [2024-07-24 22:07:04.814825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:25.726 [2024-07-24 22:07:04.814836] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.726 [2024-07-24 22:07:04.814848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:25.726 request: 00:19:25.726 { 00:19:25.726 "name": "TLSTEST", 00:19:25.726 "trtype": "tcp", 00:19:25.726 "traddr": "10.0.0.2", 00:19:25.726 "adrfam": "ipv4", 00:19:25.726 "trsvcid": "4420", 00:19:25.726 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:25.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.726 "prchk_reftag": false, 00:19:25.726 "prchk_guard": false, 00:19:25.726 "hdgst": false, 00:19:25.726 "ddgst": false, 00:19:25.726 "psk": "/tmp/tmp.iGB6Arf8r8", 00:19:25.726 "method": "bdev_nvme_attach_controller", 00:19:25.726 "req_id": 1 00:19:25.726 } 00:19:25.726 Got JSON-RPC error response 00:19:25.726 response: 00:19:25.726 { 00:19:25.726 "code": -5, 00:19:25.726 "message": "Input/output error" 00:19:25.726 } 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2716637 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2716637 ']' 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2716637 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2716637 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2716637' 00:19:25.726 killing process with pid 2716637 00:19:25.726 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2716637 00:19:25.726 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.726 00:19:25.726 Latency(us) 00:19:25.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.727 =================================================================================================================== 00:19:25.727 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.727 [2024-07-24 22:07:04.885899] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:25.727 22:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2716637 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2716901 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2716901 /var/tmp/bdevperf.sock 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2716901 ']' 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.986 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.986 [2024-07-24 22:07:05.107624] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:19:25.986 [2024-07-24 22:07:05.107677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716901 ] 00:19:25.986 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.986 [2024-07-24 22:07:05.172986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.245 [2024-07-24 22:07:05.240913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.812 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.812 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:26.812 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:27.071 [2024-07-24 22:07:06.045226] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:27.071 [2024-07-24 22:07:06.047008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a3b50 (9): Bad file descriptor 00:19:27.071 [2024-07-24 22:07:06.048007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:27.071 [2024-07-24 22:07:06.048019] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.071 [2024-07-24 22:07:06.048031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
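Each of these negative cases (wrong key, wrong hostnqn, wrong subnqn, and here no PSK at all against a listener created with -k) is driven through the suite's NOT wrapper from autotest_common.sh, so the test passes exactly when run_bdevperf hits its "return 1" at target/tls.sh@37. The wrapper's body is not part of this trace; a hypothetical stand-in with the same observable behavior:

# Hypothetical stand-in for the NOT helper: succeed only when the wrapped
# command fails, so an expected failure counts as a pass.
expect_failure() {
    if "$@"; then
        echo "expected failure, but '$*' succeeded" >&2
        return 1
    fi
    return 0
}

# e.g. expect_failure run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''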
00:19:27.071 request: 00:19:27.071 { 00:19:27.071 "name": "TLSTEST", 00:19:27.071 "trtype": "tcp", 00:19:27.071 "traddr": "10.0.0.2", 00:19:27.071 "adrfam": "ipv4", 00:19:27.071 "trsvcid": "4420", 00:19:27.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.071 "prchk_reftag": false, 00:19:27.071 "prchk_guard": false, 00:19:27.071 "hdgst": false, 00:19:27.071 "ddgst": false, 00:19:27.071 "method": "bdev_nvme_attach_controller", 00:19:27.071 "req_id": 1 00:19:27.071 } 00:19:27.071 Got JSON-RPC error response 00:19:27.071 response: 00:19:27.071 { 00:19:27.071 "code": -5, 00:19:27.071 "message": "Input/output error" 00:19:27.071 } 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2716901 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2716901 ']' 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2716901 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2716901 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2716901' 00:19:27.071 killing process with pid 2716901 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2716901 00:19:27.071 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.071 00:19:27.071 Latency(us) 00:19:27.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.071 =================================================================================================================== 00:19:27.071 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2716901 00:19:27.071 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2711817 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2711817 ']' 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2711817 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2711817 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:27.331 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2711817' 00:19:27.332 killing process with pid 2711817 00:19:27.332 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2711817 00:19:27.332 [2024-07-24 22:07:06.343348] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:27.332 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2711817 00:19:27.332 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:27.332 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:27.332 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.332 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:27.332 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:27.332 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:27.332 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.2gPA53PIuq 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.2gPA53PIuq 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2717188 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2717188 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2717188 ']' 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.591 22:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.591 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.591 [2024-07-24 22:07:06.648441] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:19:27.591 [2024-07-24 22:07:06.648492] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.591 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.591 [2024-07-24 22:07:06.722192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.591 [2024-07-24 22:07:06.793670] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.591 [2024-07-24 22:07:06.793706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.591 [2024-07-24 22:07:06.793720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.591 [2024-07-24 22:07:06.793744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.591 [2024-07-24 22:07:06.793751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
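With the first target gone, the suite restarts nvmf_tgt (core mask 0x2) and repeats the registration with the longer key written to /tmp/tmp.2gPA53PIuq; its digest field is 02 rather than 01, which in the interchange format selects a different retained-PSK hash (presumably SHA-384 instead of SHA-256, though the trace only shows the field value change). The target-side sequence driven by setup_nvmf_tgt, condensed from the RPC calls above and in the steps that follow:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.2gPA53PIuq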
00:19:27.591 [2024-07-24 22:07:06.793773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.2gPA53PIuq 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2gPA53PIuq 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.576 [2024-07-24 22:07:07.644773] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.576 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.835 22:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:28.835 [2024-07-24 22:07:07.989650] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.835 [2024-07-24 22:07:07.989860] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.835 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.094 malloc0 00:19:29.094 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gPA53PIuq 00:19:29.353 [2024-07-24 22:07:08.471028] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2gPA53PIuq 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2gPA53PIuq' 00:19:29.353 22:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2717484 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2717484 /var/tmp/bdevperf.sock 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2717484 ']' 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.353 22:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.353 [2024-07-24 22:07:08.515089] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:19:29.353 [2024-07-24 22:07:08.515138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717484 ] 00:19:29.353 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.611 [2024-07-24 22:07:08.579905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.611 [2024-07-24 22:07:08.647617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.178 22:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.178 22:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:30.178 22:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gPA53PIuq 00:19:30.436 [2024-07-24 22:07:09.478422] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.436 [2024-07-24 22:07:09.478515] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:30.436 TLSTESTn1 00:19:30.436 22:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:30.695 Running I/O for 10 seconds... 
00:19:40.675 00:19:40.675 Latency(us) 00:19:40.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.675 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.675 Verification LBA range: start 0x0 length 0x2000 00:19:40.675 TLSTESTn1 : 10.02 5068.72 19.80 0.00 0.00 25206.75 6684.67 55364.81 00:19:40.675 =================================================================================================================== 00:19:40.675 Total : 5068.72 19.80 0.00 0.00 25206.75 6684.67 55364.81 00:19:40.675 0 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2717484 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2717484 ']' 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2717484 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2717484 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2717484' 00:19:40.675 killing process with pid 2717484 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2717484 00:19:40.675 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.675 00:19:40.675 Latency(us) 00:19:40.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.675 =================================================================================================================== 00:19:40.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.675 [2024-07-24 22:07:19.804055] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:40.675 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2717484 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.2gPA53PIuq 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2gPA53PIuq 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2gPA53PIuq 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:40.934 
22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2gPA53PIuq 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2gPA53PIuq' 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2719334 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2719334 /var/tmp/bdevperf.sock 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2719334 ']' 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.934 22:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.934 [2024-07-24 22:07:20.030561] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:19:40.935 [2024-07-24 22:07:20.030675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719334 ] 00:19:40.935 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.935 [2024-07-24 22:07:20.097361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.194 [2024-07-24 22:07:20.167625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.762 22:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.762 22:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:41.762 22:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gPA53PIuq 00:19:42.022 [2024-07-24 22:07:20.997930] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.022 [2024-07-24 22:07:20.997982] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:42.022 [2024-07-24 22:07:20.997991] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.2gPA53PIuq 00:19:42.022 request: 00:19:42.022 { 00:19:42.022 "name": "TLSTEST", 00:19:42.022 "trtype": "tcp", 00:19:42.022 "traddr": "10.0.0.2", 00:19:42.022 "adrfam": "ipv4", 00:19:42.022 "trsvcid": "4420", 00:19:42.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.022 "prchk_reftag": false, 00:19:42.022 "prchk_guard": false, 00:19:42.022 "hdgst": false, 00:19:42.022 "ddgst": false, 00:19:42.022 "psk": "/tmp/tmp.2gPA53PIuq", 00:19:42.022 "method": "bdev_nvme_attach_controller", 00:19:42.022 "req_id": 1 00:19:42.022 } 00:19:42.022 Got JSON-RPC error response 00:19:42.022 response: 00:19:42.022 { 00:19:42.022 "code": -1, 00:19:42.022 "message": "Operation not permitted" 00:19:42.022 } 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2719334 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2719334 ']' 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2719334 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2719334 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2719334' 00:19:42.022 killing process with pid 2719334 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2719334 00:19:42.022 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.022 
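The attach failure above is the intended outcome of the chmod 0666 issued at target/tls.sh@170 a few lines earlier: bdev_nvme refuses to load a PSK file whose permissions are too open ("Incorrect permissions for PSK file" / "Could not load PSK"), and the RPC is answered with "Operation not permitted". target/tls.sh@181 later restores the key to 0600, after which the same attach succeeds. A minimal way to reproduce both outcomes against the key file used in this run:

  chmod 0666 /tmp/tmp.2gPA53PIuq        # key readable by group/other: bdev_nvme_attach_controller is rejected
  chmod 0600 /tmp/tmp.2gPA53PIuq        # owner-only key: the same attach succeeds
  stat -c '%a %n' /tmp/tmp.2gPA53PIuq   # quick check of the current mode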
00:19:42.022 Latency(us) 00:19:42.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.022 =================================================================================================================== 00:19:42.022 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2719334 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:42.022 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2717188 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2717188 ']' 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2717188 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2717188 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2717188' 00:19:42.282 killing process with pid 2717188 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2717188 00:19:42.282 [2024-07-24 22:07:21.293550] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2717188 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2719609 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2719609 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2719609 ']' 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.282 22:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.282 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.542 [2024-07-24 22:07:21.536375] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:19:42.542 [2024-07-24 22:07:21.536424] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.542 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.542 [2024-07-24 22:07:21.609194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.542 [2024-07-24 22:07:21.679615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.542 [2024-07-24 22:07:21.679657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.542 [2024-07-24 22:07:21.679665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.542 [2024-07-24 22:07:21.679674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.542 [2024-07-24 22:07:21.679696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
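The app_setup_trace notices above also spell out how to inspect this target's tracepoints (group mask 0xFFFF, instance id 0 from the nvmf_tgt command line); the two options quoted in the notice are, roughly:

  spdk_trace -s nvmf -i 0                        # capture a snapshot of events from the running app, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0     # or keep the shared-memory trace file for offline analysis/debug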
00:19:42.542 [2024-07-24 22:07:21.679728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.110 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.110 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.110 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.110 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:43.110 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.2gPA53PIuq 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.2gPA53PIuq 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.2gPA53PIuq 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2gPA53PIuq 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.370 [2024-07-24 22:07:22.521847] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.370 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:43.650 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:43.650 [2024-07-24 22:07:22.854734] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.650 [2024-07-24 22:07:22.854917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.910 22:07:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:43.910 malloc0 00:19:43.910 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.169 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gPA53PIuq 00:19:44.169 [2024-07-24 22:07:23.356184] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:44.169 [2024-07-24 22:07:23.356208] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:44.169 [2024-07-24 22:07:23.356232] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:44.169 request: 00:19:44.169 { 00:19:44.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.169 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.169 "psk": "/tmp/tmp.2gPA53PIuq", 00:19:44.169 "method": "nvmf_subsystem_add_host", 00:19:44.169 "req_id": 1 00:19:44.169 } 00:19:44.169 Got JSON-RPC error response 00:19:44.169 response: 00:19:44.169 { 00:19:44.169 "code": -32603, 00:19:44.169 "message": "Internal error" 00:19:44.169 } 00:19:44.169 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:44.169 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:44.169 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:44.169 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:44.169 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2719609 00:19:44.169 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2719609 ']' 00:19:44.169 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2719609 00:19:44.169 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2719609 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2719609' 00:19:44.428 killing process with pid 2719609 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2719609 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2719609 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.2gPA53PIuq 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2720117 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 2720117 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2720117 ']' 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.428 22:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.686 [2024-07-24 22:07:23.672062] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:19:44.687 [2024-07-24 22:07:23.672118] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.687 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.687 [2024-07-24 22:07:23.747358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.687 [2024-07-24 22:07:23.816663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.687 [2024-07-24 22:07:23.816706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.687 [2024-07-24 22:07:23.816720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.687 [2024-07-24 22:07:23.816744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.687 [2024-07-24 22:07:23.816751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
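The setup_nvmf_tgt helper that target/tls.sh runs against each of these freshly started targets (at @177 above with the world-readable key, where nvmf_subsystem_add_host is rejected, and at @185 below with the 0600 key, where it succeeds) boils down to the RPC sequence already visible in the log. Collected in one place, with paths and names copied verbatim from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  key=/tmp/tmp.2gPA53PIuq

  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-enabled listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key  # rejected while $key is 0666, accepted once it is 0600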
00:19:44.687 [2024-07-24 22:07:23.816774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.255 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.255 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:45.255 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.255 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.255 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.515 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.515 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.2gPA53PIuq 00:19:45.515 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2gPA53PIuq 00:19:45.515 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.515 [2024-07-24 22:07:24.646113] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.515 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:45.774 22:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:46.032 [2024-07-24 22:07:24.995011] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.032 [2024-07-24 22:07:24.995211] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.032 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:46.032 malloc0 00:19:46.032 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:46.292 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gPA53PIuq 00:19:46.551 [2024-07-24 22:07:25.516638] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2720450 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2720450 /var/tmp/bdevperf.sock 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 2720450 ']' 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.551 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.551 [2024-07-24 22:07:25.568193] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:19:46.551 [2024-07-24 22:07:25.568244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720450 ] 00:19:46.551 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.551 [2024-07-24 22:07:25.633906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.551 [2024-07-24 22:07:25.702472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.487 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.487 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:47.487 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gPA53PIuq 00:19:47.487 [2024-07-24 22:07:26.531942] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.487 [2024-07-24 22:07:26.532017] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:47.487 TLSTESTn1 00:19:47.487 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:47.747 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:47.747 "subsystems": [ 00:19:47.747 { 00:19:47.747 "subsystem": "keyring", 00:19:47.747 "config": [] 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "subsystem": "iobuf", 00:19:47.747 "config": [ 00:19:47.747 { 00:19:47.747 "method": "iobuf_set_options", 00:19:47.747 "params": { 00:19:47.747 "small_pool_count": 8192, 00:19:47.747 "large_pool_count": 1024, 00:19:47.747 "small_bufsize": 8192, 00:19:47.747 "large_bufsize": 135168 00:19:47.747 } 00:19:47.747 } 00:19:47.747 ] 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "subsystem": "sock", 00:19:47.747 "config": [ 00:19:47.747 { 00:19:47.747 "method": "sock_set_default_impl", 00:19:47.747 "params": { 00:19:47.747 "impl_name": "posix" 00:19:47.747 } 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "method": "sock_impl_set_options", 00:19:47.747 "params": { 00:19:47.747 "impl_name": "ssl", 00:19:47.747 "recv_buf_size": 4096, 00:19:47.747 "send_buf_size": 4096, 
00:19:47.747 "enable_recv_pipe": true, 00:19:47.747 "enable_quickack": false, 00:19:47.747 "enable_placement_id": 0, 00:19:47.747 "enable_zerocopy_send_server": true, 00:19:47.747 "enable_zerocopy_send_client": false, 00:19:47.747 "zerocopy_threshold": 0, 00:19:47.747 "tls_version": 0, 00:19:47.747 "enable_ktls": false 00:19:47.747 } 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "method": "sock_impl_set_options", 00:19:47.747 "params": { 00:19:47.747 "impl_name": "posix", 00:19:47.747 "recv_buf_size": 2097152, 00:19:47.747 "send_buf_size": 2097152, 00:19:47.747 "enable_recv_pipe": true, 00:19:47.747 "enable_quickack": false, 00:19:47.747 "enable_placement_id": 0, 00:19:47.747 "enable_zerocopy_send_server": true, 00:19:47.747 "enable_zerocopy_send_client": false, 00:19:47.747 "zerocopy_threshold": 0, 00:19:47.747 "tls_version": 0, 00:19:47.747 "enable_ktls": false 00:19:47.747 } 00:19:47.747 } 00:19:47.747 ] 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "subsystem": "vmd", 00:19:47.747 "config": [] 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "subsystem": "accel", 00:19:47.747 "config": [ 00:19:47.747 { 00:19:47.747 "method": "accel_set_options", 00:19:47.747 "params": { 00:19:47.747 "small_cache_size": 128, 00:19:47.747 "large_cache_size": 16, 00:19:47.747 "task_count": 2048, 00:19:47.747 "sequence_count": 2048, 00:19:47.747 "buf_count": 2048 00:19:47.747 } 00:19:47.747 } 00:19:47.747 ] 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "subsystem": "bdev", 00:19:47.747 "config": [ 00:19:47.747 { 00:19:47.747 "method": "bdev_set_options", 00:19:47.747 "params": { 00:19:47.747 "bdev_io_pool_size": 65535, 00:19:47.747 "bdev_io_cache_size": 256, 00:19:47.747 "bdev_auto_examine": true, 00:19:47.747 "iobuf_small_cache_size": 128, 00:19:47.747 "iobuf_large_cache_size": 16 00:19:47.747 } 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "method": "bdev_raid_set_options", 00:19:47.747 "params": { 00:19:47.747 "process_window_size_kb": 1024, 00:19:47.747 "process_max_bandwidth_mb_sec": 0 00:19:47.747 } 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "method": "bdev_iscsi_set_options", 00:19:47.747 "params": { 00:19:47.747 "timeout_sec": 30 00:19:47.747 } 00:19:47.747 }, 00:19:47.747 { 00:19:47.747 "method": "bdev_nvme_set_options", 00:19:47.747 "params": { 00:19:47.747 "action_on_timeout": "none", 00:19:47.747 "timeout_us": 0, 00:19:47.747 "timeout_admin_us": 0, 00:19:47.747 "keep_alive_timeout_ms": 10000, 00:19:47.747 "arbitration_burst": 0, 00:19:47.747 "low_priority_weight": 0, 00:19:47.747 "medium_priority_weight": 0, 00:19:47.747 "high_priority_weight": 0, 00:19:47.747 "nvme_adminq_poll_period_us": 10000, 00:19:47.747 "nvme_ioq_poll_period_us": 0, 00:19:47.747 "io_queue_requests": 0, 00:19:47.747 "delay_cmd_submit": true, 00:19:47.747 "transport_retry_count": 4, 00:19:47.747 "bdev_retry_count": 3, 00:19:47.747 "transport_ack_timeout": 0, 00:19:47.747 "ctrlr_loss_timeout_sec": 0, 00:19:47.747 "reconnect_delay_sec": 0, 00:19:47.747 "fast_io_fail_timeout_sec": 0, 00:19:47.747 "disable_auto_failback": false, 00:19:47.747 "generate_uuids": false, 00:19:47.747 "transport_tos": 0, 00:19:47.747 "nvme_error_stat": false, 00:19:47.747 "rdma_srq_size": 0, 00:19:47.747 "io_path_stat": false, 00:19:47.747 "allow_accel_sequence": false, 00:19:47.747 "rdma_max_cq_size": 0, 00:19:47.747 "rdma_cm_event_timeout_ms": 0, 00:19:47.747 "dhchap_digests": [ 00:19:47.747 "sha256", 00:19:47.747 "sha384", 00:19:47.747 "sha512" 00:19:47.747 ], 00:19:47.747 "dhchap_dhgroups": [ 00:19:47.747 "null", 00:19:47.747 "ffdhe2048", 00:19:47.747 
"ffdhe3072", 00:19:47.747 "ffdhe4096", 00:19:47.748 "ffdhe6144", 00:19:47.748 "ffdhe8192" 00:19:47.748 ] 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "bdev_nvme_set_hotplug", 00:19:47.748 "params": { 00:19:47.748 "period_us": 100000, 00:19:47.748 "enable": false 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "bdev_malloc_create", 00:19:47.748 "params": { 00:19:47.748 "name": "malloc0", 00:19:47.748 "num_blocks": 8192, 00:19:47.748 "block_size": 4096, 00:19:47.748 "physical_block_size": 4096, 00:19:47.748 "uuid": "7624f3cf-2837-406a-85c2-d2aafdab617c", 00:19:47.748 "optimal_io_boundary": 0, 00:19:47.748 "md_size": 0, 00:19:47.748 "dif_type": 0, 00:19:47.748 "dif_is_head_of_md": false, 00:19:47.748 "dif_pi_format": 0 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "bdev_wait_for_examine" 00:19:47.748 } 00:19:47.748 ] 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "subsystem": "nbd", 00:19:47.748 "config": [] 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "subsystem": "scheduler", 00:19:47.748 "config": [ 00:19:47.748 { 00:19:47.748 "method": "framework_set_scheduler", 00:19:47.748 "params": { 00:19:47.748 "name": "static" 00:19:47.748 } 00:19:47.748 } 00:19:47.748 ] 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "subsystem": "nvmf", 00:19:47.748 "config": [ 00:19:47.748 { 00:19:47.748 "method": "nvmf_set_config", 00:19:47.748 "params": { 00:19:47.748 "discovery_filter": "match_any", 00:19:47.748 "admin_cmd_passthru": { 00:19:47.748 "identify_ctrlr": false 00:19:47.748 } 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "nvmf_set_max_subsystems", 00:19:47.748 "params": { 00:19:47.748 "max_subsystems": 1024 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "nvmf_set_crdt", 00:19:47.748 "params": { 00:19:47.748 "crdt1": 0, 00:19:47.748 "crdt2": 0, 00:19:47.748 "crdt3": 0 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "nvmf_create_transport", 00:19:47.748 "params": { 00:19:47.748 "trtype": "TCP", 00:19:47.748 "max_queue_depth": 128, 00:19:47.748 "max_io_qpairs_per_ctrlr": 127, 00:19:47.748 "in_capsule_data_size": 4096, 00:19:47.748 "max_io_size": 131072, 00:19:47.748 "io_unit_size": 131072, 00:19:47.748 "max_aq_depth": 128, 00:19:47.748 "num_shared_buffers": 511, 00:19:47.748 "buf_cache_size": 4294967295, 00:19:47.748 "dif_insert_or_strip": false, 00:19:47.748 "zcopy": false, 00:19:47.748 "c2h_success": false, 00:19:47.748 "sock_priority": 0, 00:19:47.748 "abort_timeout_sec": 1, 00:19:47.748 "ack_timeout": 0, 00:19:47.748 "data_wr_pool_size": 0 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "nvmf_create_subsystem", 00:19:47.748 "params": { 00:19:47.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.748 "allow_any_host": false, 00:19:47.748 "serial_number": "SPDK00000000000001", 00:19:47.748 "model_number": "SPDK bdev Controller", 00:19:47.748 "max_namespaces": 10, 00:19:47.748 "min_cntlid": 1, 00:19:47.748 "max_cntlid": 65519, 00:19:47.748 "ana_reporting": false 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "nvmf_subsystem_add_host", 00:19:47.748 "params": { 00:19:47.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.748 "host": "nqn.2016-06.io.spdk:host1", 00:19:47.748 "psk": "/tmp/tmp.2gPA53PIuq" 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "nvmf_subsystem_add_ns", 00:19:47.748 "params": { 00:19:47.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.748 "namespace": { 00:19:47.748 "nsid": 1, 00:19:47.748 
"bdev_name": "malloc0", 00:19:47.748 "nguid": "7624F3CF2837406A85C2D2AAFDAB617C", 00:19:47.748 "uuid": "7624f3cf-2837-406a-85c2-d2aafdab617c", 00:19:47.748 "no_auto_visible": false 00:19:47.748 } 00:19:47.748 } 00:19:47.748 }, 00:19:47.748 { 00:19:47.748 "method": "nvmf_subsystem_add_listener", 00:19:47.748 "params": { 00:19:47.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.748 "listen_address": { 00:19:47.748 "trtype": "TCP", 00:19:47.748 "adrfam": "IPv4", 00:19:47.748 "traddr": "10.0.0.2", 00:19:47.748 "trsvcid": "4420" 00:19:47.748 }, 00:19:47.748 "secure_channel": true 00:19:47.748 } 00:19:47.748 } 00:19:47.748 ] 00:19:47.748 } 00:19:47.748 ] 00:19:47.748 }' 00:19:47.748 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:48.008 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:48.008 "subsystems": [ 00:19:48.008 { 00:19:48.008 "subsystem": "keyring", 00:19:48.008 "config": [] 00:19:48.008 }, 00:19:48.008 { 00:19:48.008 "subsystem": "iobuf", 00:19:48.008 "config": [ 00:19:48.008 { 00:19:48.008 "method": "iobuf_set_options", 00:19:48.008 "params": { 00:19:48.008 "small_pool_count": 8192, 00:19:48.008 "large_pool_count": 1024, 00:19:48.008 "small_bufsize": 8192, 00:19:48.008 "large_bufsize": 135168 00:19:48.008 } 00:19:48.008 } 00:19:48.008 ] 00:19:48.008 }, 00:19:48.008 { 00:19:48.008 "subsystem": "sock", 00:19:48.008 "config": [ 00:19:48.008 { 00:19:48.008 "method": "sock_set_default_impl", 00:19:48.008 "params": { 00:19:48.008 "impl_name": "posix" 00:19:48.008 } 00:19:48.008 }, 00:19:48.008 { 00:19:48.008 "method": "sock_impl_set_options", 00:19:48.008 "params": { 00:19:48.008 "impl_name": "ssl", 00:19:48.008 "recv_buf_size": 4096, 00:19:48.008 "send_buf_size": 4096, 00:19:48.008 "enable_recv_pipe": true, 00:19:48.008 "enable_quickack": false, 00:19:48.008 "enable_placement_id": 0, 00:19:48.008 "enable_zerocopy_send_server": true, 00:19:48.008 "enable_zerocopy_send_client": false, 00:19:48.008 "zerocopy_threshold": 0, 00:19:48.008 "tls_version": 0, 00:19:48.008 "enable_ktls": false 00:19:48.008 } 00:19:48.008 }, 00:19:48.008 { 00:19:48.008 "method": "sock_impl_set_options", 00:19:48.008 "params": { 00:19:48.008 "impl_name": "posix", 00:19:48.008 "recv_buf_size": 2097152, 00:19:48.008 "send_buf_size": 2097152, 00:19:48.008 "enable_recv_pipe": true, 00:19:48.008 "enable_quickack": false, 00:19:48.008 "enable_placement_id": 0, 00:19:48.008 "enable_zerocopy_send_server": true, 00:19:48.008 "enable_zerocopy_send_client": false, 00:19:48.008 "zerocopy_threshold": 0, 00:19:48.008 "tls_version": 0, 00:19:48.008 "enable_ktls": false 00:19:48.008 } 00:19:48.008 } 00:19:48.008 ] 00:19:48.008 }, 00:19:48.008 { 00:19:48.008 "subsystem": "vmd", 00:19:48.008 "config": [] 00:19:48.008 }, 00:19:48.008 { 00:19:48.008 "subsystem": "accel", 00:19:48.008 "config": [ 00:19:48.008 { 00:19:48.008 "method": "accel_set_options", 00:19:48.008 "params": { 00:19:48.008 "small_cache_size": 128, 00:19:48.008 "large_cache_size": 16, 00:19:48.008 "task_count": 2048, 00:19:48.008 "sequence_count": 2048, 00:19:48.008 "buf_count": 2048 00:19:48.008 } 00:19:48.008 } 00:19:48.008 ] 00:19:48.008 }, 00:19:48.008 { 00:19:48.008 "subsystem": "bdev", 00:19:48.008 "config": [ 00:19:48.008 { 00:19:48.008 "method": "bdev_set_options", 00:19:48.008 "params": { 00:19:48.008 "bdev_io_pool_size": 65535, 00:19:48.008 "bdev_io_cache_size": 256, 00:19:48.008 
"bdev_auto_examine": true, 00:19:48.008 "iobuf_small_cache_size": 128, 00:19:48.008 "iobuf_large_cache_size": 16 00:19:48.008 } 00:19:48.008 }, 00:19:48.008 { 00:19:48.008 "method": "bdev_raid_set_options", 00:19:48.008 "params": { 00:19:48.008 "process_window_size_kb": 1024, 00:19:48.008 "process_max_bandwidth_mb_sec": 0 00:19:48.008 } 00:19:48.009 }, 00:19:48.009 { 00:19:48.009 "method": "bdev_iscsi_set_options", 00:19:48.009 "params": { 00:19:48.009 "timeout_sec": 30 00:19:48.009 } 00:19:48.009 }, 00:19:48.009 { 00:19:48.009 "method": "bdev_nvme_set_options", 00:19:48.009 "params": { 00:19:48.009 "action_on_timeout": "none", 00:19:48.009 "timeout_us": 0, 00:19:48.009 "timeout_admin_us": 0, 00:19:48.009 "keep_alive_timeout_ms": 10000, 00:19:48.009 "arbitration_burst": 0, 00:19:48.009 "low_priority_weight": 0, 00:19:48.009 "medium_priority_weight": 0, 00:19:48.009 "high_priority_weight": 0, 00:19:48.009 "nvme_adminq_poll_period_us": 10000, 00:19:48.009 "nvme_ioq_poll_period_us": 0, 00:19:48.009 "io_queue_requests": 512, 00:19:48.009 "delay_cmd_submit": true, 00:19:48.009 "transport_retry_count": 4, 00:19:48.009 "bdev_retry_count": 3, 00:19:48.009 "transport_ack_timeout": 0, 00:19:48.009 "ctrlr_loss_timeout_sec": 0, 00:19:48.009 "reconnect_delay_sec": 0, 00:19:48.009 "fast_io_fail_timeout_sec": 0, 00:19:48.009 "disable_auto_failback": false, 00:19:48.009 "generate_uuids": false, 00:19:48.009 "transport_tos": 0, 00:19:48.009 "nvme_error_stat": false, 00:19:48.009 "rdma_srq_size": 0, 00:19:48.009 "io_path_stat": false, 00:19:48.009 "allow_accel_sequence": false, 00:19:48.009 "rdma_max_cq_size": 0, 00:19:48.009 "rdma_cm_event_timeout_ms": 0, 00:19:48.009 "dhchap_digests": [ 00:19:48.009 "sha256", 00:19:48.009 "sha384", 00:19:48.009 "sha512" 00:19:48.009 ], 00:19:48.009 "dhchap_dhgroups": [ 00:19:48.009 "null", 00:19:48.009 "ffdhe2048", 00:19:48.009 "ffdhe3072", 00:19:48.009 "ffdhe4096", 00:19:48.009 "ffdhe6144", 00:19:48.009 "ffdhe8192" 00:19:48.009 ] 00:19:48.009 } 00:19:48.009 }, 00:19:48.009 { 00:19:48.009 "method": "bdev_nvme_attach_controller", 00:19:48.009 "params": { 00:19:48.009 "name": "TLSTEST", 00:19:48.009 "trtype": "TCP", 00:19:48.009 "adrfam": "IPv4", 00:19:48.009 "traddr": "10.0.0.2", 00:19:48.009 "trsvcid": "4420", 00:19:48.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.009 "prchk_reftag": false, 00:19:48.009 "prchk_guard": false, 00:19:48.009 "ctrlr_loss_timeout_sec": 0, 00:19:48.009 "reconnect_delay_sec": 0, 00:19:48.009 "fast_io_fail_timeout_sec": 0, 00:19:48.009 "psk": "/tmp/tmp.2gPA53PIuq", 00:19:48.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.009 "hdgst": false, 00:19:48.009 "ddgst": false 00:19:48.009 } 00:19:48.009 }, 00:19:48.009 { 00:19:48.009 "method": "bdev_nvme_set_hotplug", 00:19:48.009 "params": { 00:19:48.009 "period_us": 100000, 00:19:48.009 "enable": false 00:19:48.009 } 00:19:48.009 }, 00:19:48.009 { 00:19:48.009 "method": "bdev_wait_for_examine" 00:19:48.009 } 00:19:48.009 ] 00:19:48.009 }, 00:19:48.009 { 00:19:48.009 "subsystem": "nbd", 00:19:48.009 "config": [] 00:19:48.009 } 00:19:48.009 ] 00:19:48.009 }' 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2720450 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2720450 ']' 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2720450 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2720450 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2720450' 00:19:48.009 killing process with pid 2720450 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2720450 00:19:48.009 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.009 00:19:48.009 Latency(us) 00:19:48.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.009 =================================================================================================================== 00:19:48.009 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.009 [2024-07-24 22:07:27.177308] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:48.009 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2720450 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2720117 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2720117 ']' 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2720117 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2720117 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2720117' 00:19:48.268 killing process with pid 2720117 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2720117 00:19:48.268 [2024-07-24 22:07:27.404074] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:48.268 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2720117 00:19:48.528 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:48.528 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.528 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.528 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:48.528 "subsystems": [ 00:19:48.528 { 00:19:48.528 "subsystem": "keyring", 00:19:48.528 "config": [] 00:19:48.528 }, 00:19:48.528 { 00:19:48.528 
"subsystem": "iobuf", 00:19:48.528 "config": [ 00:19:48.528 { 00:19:48.528 "method": "iobuf_set_options", 00:19:48.528 "params": { 00:19:48.528 "small_pool_count": 8192, 00:19:48.528 "large_pool_count": 1024, 00:19:48.528 "small_bufsize": 8192, 00:19:48.528 "large_bufsize": 135168 00:19:48.528 } 00:19:48.528 } 00:19:48.528 ] 00:19:48.528 }, 00:19:48.528 { 00:19:48.528 "subsystem": "sock", 00:19:48.528 "config": [ 00:19:48.528 { 00:19:48.528 "method": "sock_set_default_impl", 00:19:48.528 "params": { 00:19:48.528 "impl_name": "posix" 00:19:48.528 } 00:19:48.528 }, 00:19:48.528 { 00:19:48.528 "method": "sock_impl_set_options", 00:19:48.528 "params": { 00:19:48.528 "impl_name": "ssl", 00:19:48.528 "recv_buf_size": 4096, 00:19:48.528 "send_buf_size": 4096, 00:19:48.528 "enable_recv_pipe": true, 00:19:48.528 "enable_quickack": false, 00:19:48.528 "enable_placement_id": 0, 00:19:48.528 "enable_zerocopy_send_server": true, 00:19:48.528 "enable_zerocopy_send_client": false, 00:19:48.528 "zerocopy_threshold": 0, 00:19:48.528 "tls_version": 0, 00:19:48.528 "enable_ktls": false 00:19:48.528 } 00:19:48.528 }, 00:19:48.528 { 00:19:48.528 "method": "sock_impl_set_options", 00:19:48.528 "params": { 00:19:48.528 "impl_name": "posix", 00:19:48.528 "recv_buf_size": 2097152, 00:19:48.528 "send_buf_size": 2097152, 00:19:48.528 "enable_recv_pipe": true, 00:19:48.528 "enable_quickack": false, 00:19:48.528 "enable_placement_id": 0, 00:19:48.528 "enable_zerocopy_send_server": true, 00:19:48.528 "enable_zerocopy_send_client": false, 00:19:48.528 "zerocopy_threshold": 0, 00:19:48.528 "tls_version": 0, 00:19:48.528 "enable_ktls": false 00:19:48.528 } 00:19:48.528 } 00:19:48.528 ] 00:19:48.528 }, 00:19:48.528 { 00:19:48.528 "subsystem": "vmd", 00:19:48.528 "config": [] 00:19:48.528 }, 00:19:48.528 { 00:19:48.528 "subsystem": "accel", 00:19:48.528 "config": [ 00:19:48.528 { 00:19:48.528 "method": "accel_set_options", 00:19:48.528 "params": { 00:19:48.528 "small_cache_size": 128, 00:19:48.528 "large_cache_size": 16, 00:19:48.528 "task_count": 2048, 00:19:48.528 "sequence_count": 2048, 00:19:48.528 "buf_count": 2048 00:19:48.528 } 00:19:48.528 } 00:19:48.528 ] 00:19:48.528 }, 00:19:48.528 { 00:19:48.528 "subsystem": "bdev", 00:19:48.528 "config": [ 00:19:48.528 { 00:19:48.528 "method": "bdev_set_options", 00:19:48.528 "params": { 00:19:48.529 "bdev_io_pool_size": 65535, 00:19:48.529 "bdev_io_cache_size": 256, 00:19:48.529 "bdev_auto_examine": true, 00:19:48.529 "iobuf_small_cache_size": 128, 00:19:48.529 "iobuf_large_cache_size": 16 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "bdev_raid_set_options", 00:19:48.529 "params": { 00:19:48.529 "process_window_size_kb": 1024, 00:19:48.529 "process_max_bandwidth_mb_sec": 0 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "bdev_iscsi_set_options", 00:19:48.529 "params": { 00:19:48.529 "timeout_sec": 30 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "bdev_nvme_set_options", 00:19:48.529 "params": { 00:19:48.529 "action_on_timeout": "none", 00:19:48.529 "timeout_us": 0, 00:19:48.529 "timeout_admin_us": 0, 00:19:48.529 "keep_alive_timeout_ms": 10000, 00:19:48.529 "arbitration_burst": 0, 00:19:48.529 "low_priority_weight": 0, 00:19:48.529 "medium_priority_weight": 0, 00:19:48.529 "high_priority_weight": 0, 00:19:48.529 "nvme_adminq_poll_period_us": 10000, 00:19:48.529 "nvme_ioq_poll_period_us": 0, 00:19:48.529 "io_queue_requests": 0, 00:19:48.529 "delay_cmd_submit": true, 00:19:48.529 "transport_retry_count": 4, 
00:19:48.529 "bdev_retry_count": 3, 00:19:48.529 "transport_ack_timeout": 0, 00:19:48.529 "ctrlr_loss_timeout_sec": 0, 00:19:48.529 "reconnect_delay_sec": 0, 00:19:48.529 "fast_io_fail_timeout_sec": 0, 00:19:48.529 "disable_auto_failback": false, 00:19:48.529 "generate_uuids": false, 00:19:48.529 "transport_tos": 0, 00:19:48.529 "nvme_error_stat": false, 00:19:48.529 "rdma_srq_size": 0, 00:19:48.529 "io_path_stat": false, 00:19:48.529 "allow_accel_sequence": false, 00:19:48.529 "rdma_max_cq_size": 0, 00:19:48.529 "rdma_cm_event_timeout_ms": 0, 00:19:48.529 "dhchap_digests": [ 00:19:48.529 "sha256", 00:19:48.529 "sha384", 00:19:48.529 "sha512" 00:19:48.529 ], 00:19:48.529 "dhchap_dhgroups": [ 00:19:48.529 "null", 00:19:48.529 "ffdhe2048", 00:19:48.529 "ffdhe3072", 00:19:48.529 "ffdhe4096", 00:19:48.529 "ffdhe6144", 00:19:48.529 "ffdhe8192" 00:19:48.529 ] 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "bdev_nvme_set_hotplug", 00:19:48.529 "params": { 00:19:48.529 "period_us": 100000, 00:19:48.529 "enable": false 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "bdev_malloc_create", 00:19:48.529 "params": { 00:19:48.529 "name": "malloc0", 00:19:48.529 "num_blocks": 8192, 00:19:48.529 "block_size": 4096, 00:19:48.529 "physical_block_size": 4096, 00:19:48.529 "uuid": "7624f3cf-2837-406a-85c2-d2aafdab617c", 00:19:48.529 "optimal_io_boundary": 0, 00:19:48.529 "md_size": 0, 00:19:48.529 "dif_type": 0, 00:19:48.529 "dif_is_head_of_md": false, 00:19:48.529 "dif_pi_format": 0 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "bdev_wait_for_examine" 00:19:48.529 } 00:19:48.529 ] 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "subsystem": "nbd", 00:19:48.529 "config": [] 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "subsystem": "scheduler", 00:19:48.529 "config": [ 00:19:48.529 { 00:19:48.529 "method": "framework_set_scheduler", 00:19:48.529 "params": { 00:19:48.529 "name": "static" 00:19:48.529 } 00:19:48.529 } 00:19:48.529 ] 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "subsystem": "nvmf", 00:19:48.529 "config": [ 00:19:48.529 { 00:19:48.529 "method": "nvmf_set_config", 00:19:48.529 "params": { 00:19:48.529 "discovery_filter": "match_any", 00:19:48.529 "admin_cmd_passthru": { 00:19:48.529 "identify_ctrlr": false 00:19:48.529 } 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "nvmf_set_max_subsystems", 00:19:48.529 "params": { 00:19:48.529 "max_subsystems": 1024 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "nvmf_set_crdt", 00:19:48.529 "params": { 00:19:48.529 "crdt1": 0, 00:19:48.529 "crdt2": 0, 00:19:48.529 "crdt3": 0 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "nvmf_create_transport", 00:19:48.529 "params": { 00:19:48.529 "trtype": "TCP", 00:19:48.529 "max_queue_depth": 128, 00:19:48.529 "max_io_qpairs_per_ctrlr": 127, 00:19:48.529 "in_capsule_data_size": 4096, 00:19:48.529 "max_io_size": 131072, 00:19:48.529 "io_unit_size": 131072, 00:19:48.529 "max_aq_depth": 128, 00:19:48.529 "num_shared_buffers": 511, 00:19:48.529 "buf_cache_size": 4294967295, 00:19:48.529 "dif_insert_or_strip": false, 00:19:48.529 "zcopy": false, 00:19:48.529 "c2h_success": false, 00:19:48.529 "sock_priority": 0, 00:19:48.529 "abort_timeout_sec": 1, 00:19:48.529 "ack_timeout": 0, 00:19:48.529 "data_wr_pool_size": 0 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "nvmf_create_subsystem", 00:19:48.529 "params": { 00:19:48.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.529 
"allow_any_host": false, 00:19:48.529 "serial_number": "SPDK00000000000001", 00:19:48.529 "model_number": "SPDK bdev Controller", 00:19:48.529 "max_namespaces": 10, 00:19:48.529 "min_cntlid": 1, 00:19:48.529 "max_cntlid": 65519, 00:19:48.529 "ana_reporting": false 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "nvmf_subsystem_add_host", 00:19:48.529 "params": { 00:19:48.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.529 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.529 "psk": "/tmp/tmp.2gPA53PIuq" 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "nvmf_subsystem_add_ns", 00:19:48.529 "params": { 00:19:48.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.529 "namespace": { 00:19:48.529 "nsid": 1, 00:19:48.529 "bdev_name": "malloc0", 00:19:48.529 "nguid": "7624F3CF2837406A85C2D2AAFDAB617C", 00:19:48.529 "uuid": "7624f3cf-2837-406a-85c2-d2aafdab617c", 00:19:48.529 "no_auto_visible": false 00:19:48.529 } 00:19:48.529 } 00:19:48.529 }, 00:19:48.529 { 00:19:48.529 "method": "nvmf_subsystem_add_listener", 00:19:48.529 "params": { 00:19:48.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.529 "listen_address": { 00:19:48.529 "trtype": "TCP", 00:19:48.529 "adrfam": "IPv4", 00:19:48.529 "traddr": "10.0.0.2", 00:19:48.529 "trsvcid": "4420" 00:19:48.529 }, 00:19:48.529 "secure_channel": true 00:19:48.529 } 00:19:48.529 } 00:19:48.529 ] 00:19:48.529 } 00:19:48.529 ] 00:19:48.529 }' 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2720739 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2720739 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2720739 ']' 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.529 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.529 [2024-07-24 22:07:27.643420] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:19:48.529 [2024-07-24 22:07:27.643468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.529 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.530 [2024-07-24 22:07:27.714706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.805 [2024-07-24 22:07:27.787607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:48.805 [2024-07-24 22:07:27.787644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.805 [2024-07-24 22:07:27.787653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.805 [2024-07-24 22:07:27.787663] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.805 [2024-07-24 22:07:27.787670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.805 [2024-07-24 22:07:27.787729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.805 [2024-07-24 22:07:27.988759] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.114 [2024-07-24 22:07:28.010113] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:49.114 [2024-07-24 22:07:28.026155] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.114 [2024-07-24 22:07:28.026324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2721014 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2721014 /var/tmp/bdevperf.sock 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2721014 ']' 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
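This target instance (pid 2720739) is not provisioned through individual RPCs like the previous ones: the JSON blob echoed at target/tls.sh@203 above, in effect the save_config output captured earlier with the TLS listener and PSK host entry included, is handed to nvmf_tgt as -c /dev/fd/62, the file descriptor a bash process substitution yields. The shape of that invocation is roughly as follows (a sketch; tgtconf stands for the echoed JSON, and this run additionally wraps the target in ip netns exec cvl_0_0_ns_spdk):

  # Capture the live configuration of an already-provisioned target ...
  tgtconf=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config)

  # ... and boot a fresh target directly from it: subsystem, TLS listener and PSK host entry are applied at start-up.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")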
00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:49.374 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:49.374 "subsystems": [ 00:19:49.374 { 00:19:49.374 "subsystem": "keyring", 00:19:49.374 "config": [] 00:19:49.374 }, 00:19:49.374 { 00:19:49.374 "subsystem": "iobuf", 00:19:49.374 "config": [ 00:19:49.374 { 00:19:49.374 "method": "iobuf_set_options", 00:19:49.374 "params": { 00:19:49.374 "small_pool_count": 8192, 00:19:49.374 "large_pool_count": 1024, 00:19:49.374 "small_bufsize": 8192, 00:19:49.374 "large_bufsize": 135168 00:19:49.374 } 00:19:49.374 } 00:19:49.374 ] 00:19:49.374 }, 00:19:49.374 { 00:19:49.374 "subsystem": "sock", 00:19:49.374 "config": [ 00:19:49.374 { 00:19:49.374 "method": "sock_set_default_impl", 00:19:49.374 "params": { 00:19:49.374 "impl_name": "posix" 00:19:49.374 } 00:19:49.374 }, 00:19:49.374 { 00:19:49.374 "method": "sock_impl_set_options", 00:19:49.374 "params": { 00:19:49.374 "impl_name": "ssl", 00:19:49.374 "recv_buf_size": 4096, 00:19:49.374 "send_buf_size": 4096, 00:19:49.374 "enable_recv_pipe": true, 00:19:49.375 "enable_quickack": false, 00:19:49.375 "enable_placement_id": 0, 00:19:49.375 "enable_zerocopy_send_server": true, 00:19:49.375 "enable_zerocopy_send_client": false, 00:19:49.375 "zerocopy_threshold": 0, 00:19:49.375 "tls_version": 0, 00:19:49.375 "enable_ktls": false 00:19:49.375 } 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "method": "sock_impl_set_options", 00:19:49.375 "params": { 00:19:49.375 "impl_name": "posix", 00:19:49.375 "recv_buf_size": 2097152, 00:19:49.375 "send_buf_size": 2097152, 00:19:49.375 "enable_recv_pipe": true, 00:19:49.375 "enable_quickack": false, 00:19:49.375 "enable_placement_id": 0, 00:19:49.375 "enable_zerocopy_send_server": true, 00:19:49.375 "enable_zerocopy_send_client": false, 00:19:49.375 "zerocopy_threshold": 0, 00:19:49.375 "tls_version": 0, 00:19:49.375 "enable_ktls": false 00:19:49.375 } 00:19:49.375 } 00:19:49.375 ] 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "subsystem": "vmd", 00:19:49.375 "config": [] 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "subsystem": "accel", 00:19:49.375 "config": [ 00:19:49.375 { 00:19:49.375 "method": "accel_set_options", 00:19:49.375 "params": { 00:19:49.375 "small_cache_size": 128, 00:19:49.375 "large_cache_size": 16, 00:19:49.375 "task_count": 2048, 00:19:49.375 "sequence_count": 2048, 00:19:49.375 "buf_count": 2048 00:19:49.375 } 00:19:49.375 } 00:19:49.375 ] 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "subsystem": "bdev", 00:19:49.375 "config": [ 00:19:49.375 { 00:19:49.375 "method": "bdev_set_options", 00:19:49.375 "params": { 00:19:49.375 "bdev_io_pool_size": 65535, 00:19:49.375 "bdev_io_cache_size": 256, 00:19:49.375 "bdev_auto_examine": true, 00:19:49.375 "iobuf_small_cache_size": 128, 00:19:49.375 "iobuf_large_cache_size": 16 00:19:49.375 } 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "method": "bdev_raid_set_options", 00:19:49.375 "params": { 00:19:49.375 "process_window_size_kb": 1024, 00:19:49.375 "process_max_bandwidth_mb_sec": 0 00:19:49.375 } 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "method": "bdev_iscsi_set_options", 00:19:49.375 "params": { 00:19:49.375 "timeout_sec": 30 00:19:49.375 } 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "method": "bdev_nvme_set_options", 00:19:49.375 "params": { 00:19:49.375 "action_on_timeout": "none", 00:19:49.375 "timeout_us": 0, 00:19:49.375 "timeout_admin_us": 0, 00:19:49.375 "keep_alive_timeout_ms": 10000, 00:19:49.375 
"arbitration_burst": 0, 00:19:49.375 "low_priority_weight": 0, 00:19:49.375 "medium_priority_weight": 0, 00:19:49.375 "high_priority_weight": 0, 00:19:49.375 "nvme_adminq_poll_period_us": 10000, 00:19:49.375 "nvme_ioq_poll_period_us": 0, 00:19:49.375 "io_queue_requests": 512, 00:19:49.375 "delay_cmd_submit": true, 00:19:49.375 "transport_retry_count": 4, 00:19:49.375 "bdev_retry_count": 3, 00:19:49.375 "transport_ack_timeout": 0, 00:19:49.375 "ctrlr_loss_timeout_sec": 0, 00:19:49.375 "reconnect_delay_sec": 0, 00:19:49.375 "fast_io_fail_timeout_sec": 0, 00:19:49.375 "disable_auto_failback": false, 00:19:49.375 "generate_uuids": false, 00:19:49.375 "transport_tos": 0, 00:19:49.375 "nvme_error_stat": false, 00:19:49.375 "rdma_srq_size": 0, 00:19:49.375 "io_path_stat": false, 00:19:49.375 "allow_accel_sequence": false, 00:19:49.375 "rdma_max_cq_size": 0, 00:19:49.375 "rdma_cm_event_timeout_ms": 0, 00:19:49.375 "dhchap_digests": [ 00:19:49.375 "sha256", 00:19:49.375 "sha384", 00:19:49.375 "sha512" 00:19:49.375 ], 00:19:49.375 "dhchap_dhgroups": [ 00:19:49.375 "null", 00:19:49.375 "ffdhe2048", 00:19:49.375 "ffdhe3072", 00:19:49.375 "ffdhe4096", 00:19:49.375 "ffdhe6144", 00:19:49.375 "ffdhe8192" 00:19:49.375 ] 00:19:49.375 } 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "method": "bdev_nvme_attach_controller", 00:19:49.375 "params": { 00:19:49.375 "name": "TLSTEST", 00:19:49.375 "trtype": "TCP", 00:19:49.375 "adrfam": "IPv4", 00:19:49.375 "traddr": "10.0.0.2", 00:19:49.375 "trsvcid": "4420", 00:19:49.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.375 "prchk_reftag": false, 00:19:49.375 "prchk_guard": false, 00:19:49.375 "ctrlr_loss_timeout_sec": 0, 00:19:49.375 "reconnect_delay_sec": 0, 00:19:49.375 "fast_io_fail_timeout_sec": 0, 00:19:49.375 "psk": "/tmp/tmp.2gPA53PIuq", 00:19:49.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.375 "hdgst": false, 00:19:49.375 "ddgst": false 00:19:49.375 } 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "method": "bdev_nvme_set_hotplug", 00:19:49.375 "params": { 00:19:49.375 "period_us": 100000, 00:19:49.375 "enable": false 00:19:49.375 } 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "method": "bdev_wait_for_examine" 00:19:49.375 } 00:19:49.375 ] 00:19:49.375 }, 00:19:49.375 { 00:19:49.375 "subsystem": "nbd", 00:19:49.375 "config": [] 00:19:49.375 } 00:19:49.375 ] 00:19:49.375 }' 00:19:49.375 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.375 [2024-07-24 22:07:28.550097] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:19:49.375 [2024-07-24 22:07:28.550148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2721014 ] 00:19:49.375 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.635 [2024-07-24 22:07:28.615636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.635 [2024-07-24 22:07:28.684468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.635 [2024-07-24 22:07:28.827044] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.635 [2024-07-24 22:07:28.827130] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:50.203 22:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.203 22:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:50.203 22:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:50.462 Running I/O for 10 seconds... 00:20:00.439 00:20:00.439 Latency(us) 00:20:00.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.439 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:00.439 Verification LBA range: start 0x0 length 0x2000 00:20:00.439 TLSTESTn1 : 10.02 5092.87 19.89 0.00 0.00 25086.37 6396.31 53057.95 00:20:00.439 =================================================================================================================== 00:20:00.439 Total : 5092.87 19.89 0.00 0.00 25086.37 6396.31 53057.95 00:20:00.439 0 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2721014 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2721014 ']' 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2721014 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2721014 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2721014' 00:20:00.439 killing process with pid 2721014 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2721014 00:20:00.439 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.439 00:20:00.439 Latency(us) 00:20:00.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.439 
=================================================================================================================== 00:20:00.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.439 [2024-07-24 22:07:39.540240] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:00.439 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2721014 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2720739 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2720739 ']' 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2720739 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2720739 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2720739' 00:20:00.697 killing process with pid 2720739 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2720739 00:20:00.697 [2024-07-24 22:07:39.772937] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:00.697 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2720739 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2722861 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2722861 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2722861 ']' 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
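In the 10 second leg that just finished, bdevperf acted as the TLS initiator: its /dev/fd/63 config attached controller TLSTEST to nqn.2016-06.io.spdk:cnode1 with "psk": "/tmp/tmp.2gPA53PIuq", and the verify workload was then driven over the bdevperf RPC socket. Condensed, the driver step is the single call shown above (the -t 20 is the RPC wait timeout for bdevperf.py, chosen larger than the 10 s run length set by bdevperf's own -t 10):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

This leg completed cleanly at roughly 5093 IOPS (19.89 MiB/s) with no failed or timed-out I/O before both processes were torn down.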
00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.955 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.955 [2024-07-24 22:07:40.012076] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:20:00.955 [2024-07-24 22:07:40.012128] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.955 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.955 [2024-07-24 22:07:40.087329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.955 [2024-07-24 22:07:40.156585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.955 [2024-07-24 22:07:40.156627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.955 [2024-07-24 22:07:40.156637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.955 [2024-07-24 22:07:40.156646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.955 [2024-07-24 22:07:40.156653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.955 [2024-07-24 22:07:40.156675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.893 22:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.893 22:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.893 22:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.893 22:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.893 22:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.893 22:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.893 22:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.2gPA53PIuq 00:20:01.893 22:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2gPA53PIuq 00:20:01.893 22:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:01.893 [2024-07-24 22:07:41.017079] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.893 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:02.152 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:02.152 [2024-07-24 22:07:41.361954] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:02.152 [2024-07-24 22:07:41.362138] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.411 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:02.411 malloc0 00:20:02.411 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:02.669 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gPA53PIuq 00:20:02.669 [2024-07-24 22:07:41.867421] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2723157 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2723157 /var/tmp/bdevperf.sock 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2723157 ']' 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:02.929 22:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.929 [2024-07-24 22:07:41.935853] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:20:02.929 [2024-07-24 22:07:41.935906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723157 ] 00:20:02.929 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.929 [2024-07-24 22:07:42.004253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.929 [2024-07-24 22:07:42.075280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.866 22:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:03.866 22:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:03.866 22:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2gPA53PIuq 00:20:03.866 22:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:03.866 [2024-07-24 22:07:43.033985] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.124 nvme0n1 00:20:04.124 22:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.124 Running I/O for 1 seconds... 00:20:05.058 00:20:05.058 Latency(us) 00:20:05.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.058 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:05.058 Verification LBA range: start 0x0 length 0x2000 00:20:05.059 nvme0n1 : 1.03 3711.80 14.50 0.00 0.00 34022.65 6815.74 51380.22 00:20:05.059 =================================================================================================================== 00:20:05.059 Total : 3711.80 14.50 0.00 0.00 34022.65 6815.74 51380.22 00:20:05.059 0 00:20:05.059 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2723157 00:20:05.059 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2723157 ']' 00:20:05.059 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2723157 00:20:05.059 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:05.059 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.059 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2723157 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2723157' 00:20:05.317 killing process with pid 2723157 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2723157 00:20:05.317 Received shutdown signal, 
test time was about 1.000000 seconds 00:20:05.317 00:20:05.317 Latency(us) 00:20:05.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.317 =================================================================================================================== 00:20:05.317 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2723157 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2722861 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2722861 ']' 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2722861 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.317 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2722861 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2722861' 00:20:05.577 killing process with pid 2722861 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2722861 00:20:05.577 [2024-07-24 22:07:44.536602] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2722861 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2723696 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2723696 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2723696 ']' 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
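The second leg, torn down just above, built the target (pid 2722861) with explicit rpc.py calls rather than an inline JSON config and switched the initiator from the deprecated PSK-path parameter to the keyring interface: the key file is registered under a name and the TLS attach references that name. Restated from the trace at 22:07:42-22:07:43 (rpc.py again abbreviates the workspace path):

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2gPA53PIuq
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests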
00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.577 22:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.577 [2024-07-24 22:07:44.783440] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:20:05.577 [2024-07-24 22:07:44.783493] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.836 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.836 [2024-07-24 22:07:44.857388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.836 [2024-07-24 22:07:44.919241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.836 [2024-07-24 22:07:44.919282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.836 [2024-07-24 22:07:44.919291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.836 [2024-07-24 22:07:44.919299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.836 [2024-07-24 22:07:44.919323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.836 [2024-07-24 22:07:44.919345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.405 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.405 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:06.405 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:06.405 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.405 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.405 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.405 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:06.405 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.405 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.664 [2024-07-24 22:07:45.625584] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.664 malloc0 00:20:06.664 [2024-07-24 22:07:45.654070] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.664 [2024-07-24 22:07:45.662838] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.664 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.664 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2723930 00:20:06.664 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:06.664 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2723930 /var/tmp/bdevperf.sock 00:20:06.664 22:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2723930 ']' 00:20:06.664 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.664 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.664 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.664 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.664 22:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.664 [2024-07-24 22:07:45.736145] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:20:06.664 [2024-07-24 22:07:45.736191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723930 ] 00:20:06.664 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.664 [2024-07-24 22:07:45.806865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.923 [2024-07-24 22:07:45.881770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.491 22:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.491 22:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:07.491 22:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2gPA53PIuq 00:20:07.750 22:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:07.750 [2024-07-24 22:07:46.860028] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.750 nvme0n1 00:20:07.750 22:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.009 Running I/O for 1 seconds... 
00:20:08.946 00:20:08.946 Latency(us) 00:20:08.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.946 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:08.946 Verification LBA range: start 0x0 length 0x2000 00:20:08.946 nvme0n1 : 1.03 1935.50 7.56 0.00 0.00 65332.27 6868.17 75497.47 00:20:08.946 =================================================================================================================== 00:20:08.946 Total : 1935.50 7.56 0.00 0.00 65332.27 6868.17 75497.47 00:20:08.946 0 00:20:08.946 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:08.946 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.946 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.207 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.207 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:09.207 "subsystems": [ 00:20:09.207 { 00:20:09.207 "subsystem": "keyring", 00:20:09.207 "config": [ 00:20:09.207 { 00:20:09.207 "method": "keyring_file_add_key", 00:20:09.207 "params": { 00:20:09.207 "name": "key0", 00:20:09.207 "path": "/tmp/tmp.2gPA53PIuq" 00:20:09.207 } 00:20:09.207 } 00:20:09.207 ] 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "subsystem": "iobuf", 00:20:09.207 "config": [ 00:20:09.207 { 00:20:09.207 "method": "iobuf_set_options", 00:20:09.207 "params": { 00:20:09.207 "small_pool_count": 8192, 00:20:09.207 "large_pool_count": 1024, 00:20:09.207 "small_bufsize": 8192, 00:20:09.207 "large_bufsize": 135168 00:20:09.207 } 00:20:09.207 } 00:20:09.207 ] 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "subsystem": "sock", 00:20:09.207 "config": [ 00:20:09.207 { 00:20:09.207 "method": "sock_set_default_impl", 00:20:09.207 "params": { 00:20:09.207 "impl_name": "posix" 00:20:09.207 } 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "method": "sock_impl_set_options", 00:20:09.207 "params": { 00:20:09.207 "impl_name": "ssl", 00:20:09.207 "recv_buf_size": 4096, 00:20:09.207 "send_buf_size": 4096, 00:20:09.207 "enable_recv_pipe": true, 00:20:09.207 "enable_quickack": false, 00:20:09.207 "enable_placement_id": 0, 00:20:09.207 "enable_zerocopy_send_server": true, 00:20:09.207 "enable_zerocopy_send_client": false, 00:20:09.207 "zerocopy_threshold": 0, 00:20:09.207 "tls_version": 0, 00:20:09.207 "enable_ktls": false 00:20:09.207 } 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "method": "sock_impl_set_options", 00:20:09.207 "params": { 00:20:09.207 "impl_name": "posix", 00:20:09.207 "recv_buf_size": 2097152, 00:20:09.207 "send_buf_size": 2097152, 00:20:09.207 "enable_recv_pipe": true, 00:20:09.207 "enable_quickack": false, 00:20:09.207 "enable_placement_id": 0, 00:20:09.207 "enable_zerocopy_send_server": true, 00:20:09.207 "enable_zerocopy_send_client": false, 00:20:09.207 "zerocopy_threshold": 0, 00:20:09.207 "tls_version": 0, 00:20:09.207 "enable_ktls": false 00:20:09.207 } 00:20:09.207 } 00:20:09.207 ] 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "subsystem": "vmd", 00:20:09.207 "config": [] 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "subsystem": "accel", 00:20:09.207 "config": [ 00:20:09.207 { 00:20:09.207 "method": "accel_set_options", 00:20:09.207 "params": { 00:20:09.207 "small_cache_size": 128, 00:20:09.207 "large_cache_size": 16, 00:20:09.207 "task_count": 2048, 00:20:09.207 "sequence_count": 2048, 00:20:09.207 "buf_count": 
2048 00:20:09.207 } 00:20:09.207 } 00:20:09.207 ] 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "subsystem": "bdev", 00:20:09.207 "config": [ 00:20:09.207 { 00:20:09.207 "method": "bdev_set_options", 00:20:09.207 "params": { 00:20:09.207 "bdev_io_pool_size": 65535, 00:20:09.207 "bdev_io_cache_size": 256, 00:20:09.207 "bdev_auto_examine": true, 00:20:09.207 "iobuf_small_cache_size": 128, 00:20:09.207 "iobuf_large_cache_size": 16 00:20:09.207 } 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "method": "bdev_raid_set_options", 00:20:09.207 "params": { 00:20:09.207 "process_window_size_kb": 1024, 00:20:09.207 "process_max_bandwidth_mb_sec": 0 00:20:09.207 } 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "method": "bdev_iscsi_set_options", 00:20:09.207 "params": { 00:20:09.207 "timeout_sec": 30 00:20:09.207 } 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "method": "bdev_nvme_set_options", 00:20:09.207 "params": { 00:20:09.207 "action_on_timeout": "none", 00:20:09.207 "timeout_us": 0, 00:20:09.207 "timeout_admin_us": 0, 00:20:09.207 "keep_alive_timeout_ms": 10000, 00:20:09.207 "arbitration_burst": 0, 00:20:09.207 "low_priority_weight": 0, 00:20:09.207 "medium_priority_weight": 0, 00:20:09.207 "high_priority_weight": 0, 00:20:09.207 "nvme_adminq_poll_period_us": 10000, 00:20:09.207 "nvme_ioq_poll_period_us": 0, 00:20:09.207 "io_queue_requests": 0, 00:20:09.207 "delay_cmd_submit": true, 00:20:09.207 "transport_retry_count": 4, 00:20:09.207 "bdev_retry_count": 3, 00:20:09.207 "transport_ack_timeout": 0, 00:20:09.207 "ctrlr_loss_timeout_sec": 0, 00:20:09.207 "reconnect_delay_sec": 0, 00:20:09.207 "fast_io_fail_timeout_sec": 0, 00:20:09.207 "disable_auto_failback": false, 00:20:09.207 "generate_uuids": false, 00:20:09.207 "transport_tos": 0, 00:20:09.207 "nvme_error_stat": false, 00:20:09.207 "rdma_srq_size": 0, 00:20:09.207 "io_path_stat": false, 00:20:09.207 "allow_accel_sequence": false, 00:20:09.207 "rdma_max_cq_size": 0, 00:20:09.207 "rdma_cm_event_timeout_ms": 0, 00:20:09.207 "dhchap_digests": [ 00:20:09.207 "sha256", 00:20:09.207 "sha384", 00:20:09.207 "sha512" 00:20:09.207 ], 00:20:09.207 "dhchap_dhgroups": [ 00:20:09.207 "null", 00:20:09.207 "ffdhe2048", 00:20:09.207 "ffdhe3072", 00:20:09.207 "ffdhe4096", 00:20:09.207 "ffdhe6144", 00:20:09.207 "ffdhe8192" 00:20:09.207 ] 00:20:09.207 } 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "method": "bdev_nvme_set_hotplug", 00:20:09.207 "params": { 00:20:09.207 "period_us": 100000, 00:20:09.207 "enable": false 00:20:09.207 } 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "method": "bdev_malloc_create", 00:20:09.207 "params": { 00:20:09.207 "name": "malloc0", 00:20:09.207 "num_blocks": 8192, 00:20:09.207 "block_size": 4096, 00:20:09.207 "physical_block_size": 4096, 00:20:09.207 "uuid": "e3c5df40-21ce-440a-820a-3e1bf61bc6cf", 00:20:09.207 "optimal_io_boundary": 0, 00:20:09.207 "md_size": 0, 00:20:09.207 "dif_type": 0, 00:20:09.207 "dif_is_head_of_md": false, 00:20:09.207 "dif_pi_format": 0 00:20:09.207 } 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "method": "bdev_wait_for_examine" 00:20:09.207 } 00:20:09.207 ] 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "subsystem": "nbd", 00:20:09.207 "config": [] 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "subsystem": "scheduler", 00:20:09.207 "config": [ 00:20:09.207 { 00:20:09.207 "method": "framework_set_scheduler", 00:20:09.207 "params": { 00:20:09.207 "name": "static" 00:20:09.207 } 00:20:09.207 } 00:20:09.207 ] 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "subsystem": "nvmf", 00:20:09.207 "config": [ 00:20:09.207 { 00:20:09.207 
"method": "nvmf_set_config", 00:20:09.207 "params": { 00:20:09.207 "discovery_filter": "match_any", 00:20:09.207 "admin_cmd_passthru": { 00:20:09.207 "identify_ctrlr": false 00:20:09.207 } 00:20:09.207 } 00:20:09.207 }, 00:20:09.207 { 00:20:09.207 "method": "nvmf_set_max_subsystems", 00:20:09.207 "params": { 00:20:09.208 "max_subsystems": 1024 00:20:09.208 } 00:20:09.208 }, 00:20:09.208 { 00:20:09.208 "method": "nvmf_set_crdt", 00:20:09.208 "params": { 00:20:09.208 "crdt1": 0, 00:20:09.208 "crdt2": 0, 00:20:09.208 "crdt3": 0 00:20:09.208 } 00:20:09.208 }, 00:20:09.208 { 00:20:09.208 "method": "nvmf_create_transport", 00:20:09.208 "params": { 00:20:09.208 "trtype": "TCP", 00:20:09.208 "max_queue_depth": 128, 00:20:09.208 "max_io_qpairs_per_ctrlr": 127, 00:20:09.208 "in_capsule_data_size": 4096, 00:20:09.208 "max_io_size": 131072, 00:20:09.208 "io_unit_size": 131072, 00:20:09.208 "max_aq_depth": 128, 00:20:09.208 "num_shared_buffers": 511, 00:20:09.208 "buf_cache_size": 4294967295, 00:20:09.208 "dif_insert_or_strip": false, 00:20:09.208 "zcopy": false, 00:20:09.208 "c2h_success": false, 00:20:09.208 "sock_priority": 0, 00:20:09.208 "abort_timeout_sec": 1, 00:20:09.208 "ack_timeout": 0, 00:20:09.208 "data_wr_pool_size": 0 00:20:09.208 } 00:20:09.208 }, 00:20:09.208 { 00:20:09.208 "method": "nvmf_create_subsystem", 00:20:09.208 "params": { 00:20:09.208 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.208 "allow_any_host": false, 00:20:09.208 "serial_number": "00000000000000000000", 00:20:09.208 "model_number": "SPDK bdev Controller", 00:20:09.208 "max_namespaces": 32, 00:20:09.208 "min_cntlid": 1, 00:20:09.208 "max_cntlid": 65519, 00:20:09.208 "ana_reporting": false 00:20:09.208 } 00:20:09.208 }, 00:20:09.208 { 00:20:09.208 "method": "nvmf_subsystem_add_host", 00:20:09.208 "params": { 00:20:09.208 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.208 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.208 "psk": "key0" 00:20:09.208 } 00:20:09.208 }, 00:20:09.208 { 00:20:09.208 "method": "nvmf_subsystem_add_ns", 00:20:09.208 "params": { 00:20:09.208 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.208 "namespace": { 00:20:09.208 "nsid": 1, 00:20:09.208 "bdev_name": "malloc0", 00:20:09.208 "nguid": "E3C5DF4021CE440A820A3E1BF61BC6CF", 00:20:09.208 "uuid": "e3c5df40-21ce-440a-820a-3e1bf61bc6cf", 00:20:09.208 "no_auto_visible": false 00:20:09.208 } 00:20:09.208 } 00:20:09.208 }, 00:20:09.208 { 00:20:09.208 "method": "nvmf_subsystem_add_listener", 00:20:09.208 "params": { 00:20:09.208 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.208 "listen_address": { 00:20:09.208 "trtype": "TCP", 00:20:09.208 "adrfam": "IPv4", 00:20:09.208 "traddr": "10.0.0.2", 00:20:09.208 "trsvcid": "4420" 00:20:09.208 }, 00:20:09.208 "secure_channel": false, 00:20:09.208 "sock_impl": "ssl" 00:20:09.208 } 00:20:09.208 } 00:20:09.208 ] 00:20:09.208 } 00:20:09.208 ] 00:20:09.208 }' 00:20:09.208 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:09.468 "subsystems": [ 00:20:09.468 { 00:20:09.468 "subsystem": "keyring", 00:20:09.468 "config": [ 00:20:09.468 { 00:20:09.468 "method": "keyring_file_add_key", 00:20:09.468 "params": { 00:20:09.468 "name": "key0", 00:20:09.468 "path": "/tmp/tmp.2gPA53PIuq" 00:20:09.468 } 00:20:09.468 } 00:20:09.468 ] 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "subsystem": "iobuf", 00:20:09.468 
"config": [ 00:20:09.468 { 00:20:09.468 "method": "iobuf_set_options", 00:20:09.468 "params": { 00:20:09.468 "small_pool_count": 8192, 00:20:09.468 "large_pool_count": 1024, 00:20:09.468 "small_bufsize": 8192, 00:20:09.468 "large_bufsize": 135168 00:20:09.468 } 00:20:09.468 } 00:20:09.468 ] 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "subsystem": "sock", 00:20:09.468 "config": [ 00:20:09.468 { 00:20:09.468 "method": "sock_set_default_impl", 00:20:09.468 "params": { 00:20:09.468 "impl_name": "posix" 00:20:09.468 } 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "method": "sock_impl_set_options", 00:20:09.468 "params": { 00:20:09.468 "impl_name": "ssl", 00:20:09.468 "recv_buf_size": 4096, 00:20:09.468 "send_buf_size": 4096, 00:20:09.468 "enable_recv_pipe": true, 00:20:09.468 "enable_quickack": false, 00:20:09.468 "enable_placement_id": 0, 00:20:09.468 "enable_zerocopy_send_server": true, 00:20:09.468 "enable_zerocopy_send_client": false, 00:20:09.468 "zerocopy_threshold": 0, 00:20:09.468 "tls_version": 0, 00:20:09.468 "enable_ktls": false 00:20:09.468 } 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "method": "sock_impl_set_options", 00:20:09.468 "params": { 00:20:09.468 "impl_name": "posix", 00:20:09.468 "recv_buf_size": 2097152, 00:20:09.468 "send_buf_size": 2097152, 00:20:09.468 "enable_recv_pipe": true, 00:20:09.468 "enable_quickack": false, 00:20:09.468 "enable_placement_id": 0, 00:20:09.468 "enable_zerocopy_send_server": true, 00:20:09.468 "enable_zerocopy_send_client": false, 00:20:09.468 "zerocopy_threshold": 0, 00:20:09.468 "tls_version": 0, 00:20:09.468 "enable_ktls": false 00:20:09.468 } 00:20:09.468 } 00:20:09.468 ] 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "subsystem": "vmd", 00:20:09.468 "config": [] 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "subsystem": "accel", 00:20:09.468 "config": [ 00:20:09.468 { 00:20:09.468 "method": "accel_set_options", 00:20:09.468 "params": { 00:20:09.468 "small_cache_size": 128, 00:20:09.468 "large_cache_size": 16, 00:20:09.468 "task_count": 2048, 00:20:09.468 "sequence_count": 2048, 00:20:09.468 "buf_count": 2048 00:20:09.468 } 00:20:09.468 } 00:20:09.468 ] 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "subsystem": "bdev", 00:20:09.468 "config": [ 00:20:09.468 { 00:20:09.468 "method": "bdev_set_options", 00:20:09.468 "params": { 00:20:09.468 "bdev_io_pool_size": 65535, 00:20:09.468 "bdev_io_cache_size": 256, 00:20:09.468 "bdev_auto_examine": true, 00:20:09.468 "iobuf_small_cache_size": 128, 00:20:09.468 "iobuf_large_cache_size": 16 00:20:09.468 } 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "method": "bdev_raid_set_options", 00:20:09.468 "params": { 00:20:09.468 "process_window_size_kb": 1024, 00:20:09.468 "process_max_bandwidth_mb_sec": 0 00:20:09.468 } 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "method": "bdev_iscsi_set_options", 00:20:09.468 "params": { 00:20:09.468 "timeout_sec": 30 00:20:09.468 } 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "method": "bdev_nvme_set_options", 00:20:09.468 "params": { 00:20:09.468 "action_on_timeout": "none", 00:20:09.468 "timeout_us": 0, 00:20:09.468 "timeout_admin_us": 0, 00:20:09.468 "keep_alive_timeout_ms": 10000, 00:20:09.468 "arbitration_burst": 0, 00:20:09.468 "low_priority_weight": 0, 00:20:09.468 "medium_priority_weight": 0, 00:20:09.468 "high_priority_weight": 0, 00:20:09.468 "nvme_adminq_poll_period_us": 10000, 00:20:09.468 "nvme_ioq_poll_period_us": 0, 00:20:09.468 "io_queue_requests": 512, 00:20:09.468 "delay_cmd_submit": true, 00:20:09.468 "transport_retry_count": 4, 00:20:09.468 "bdev_retry_count": 3, 
00:20:09.468 "transport_ack_timeout": 0, 00:20:09.468 "ctrlr_loss_timeout_sec": 0, 00:20:09.468 "reconnect_delay_sec": 0, 00:20:09.468 "fast_io_fail_timeout_sec": 0, 00:20:09.468 "disable_auto_failback": false, 00:20:09.468 "generate_uuids": false, 00:20:09.468 "transport_tos": 0, 00:20:09.468 "nvme_error_stat": false, 00:20:09.468 "rdma_srq_size": 0, 00:20:09.468 "io_path_stat": false, 00:20:09.468 "allow_accel_sequence": false, 00:20:09.468 "rdma_max_cq_size": 0, 00:20:09.468 "rdma_cm_event_timeout_ms": 0, 00:20:09.468 "dhchap_digests": [ 00:20:09.468 "sha256", 00:20:09.468 "sha384", 00:20:09.468 "sha512" 00:20:09.468 ], 00:20:09.468 "dhchap_dhgroups": [ 00:20:09.468 "null", 00:20:09.468 "ffdhe2048", 00:20:09.468 "ffdhe3072", 00:20:09.468 "ffdhe4096", 00:20:09.468 "ffdhe6144", 00:20:09.468 "ffdhe8192" 00:20:09.468 ] 00:20:09.468 } 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "method": "bdev_nvme_attach_controller", 00:20:09.468 "params": { 00:20:09.468 "name": "nvme0", 00:20:09.468 "trtype": "TCP", 00:20:09.468 "adrfam": "IPv4", 00:20:09.468 "traddr": "10.0.0.2", 00:20:09.468 "trsvcid": "4420", 00:20:09.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.468 "prchk_reftag": false, 00:20:09.468 "prchk_guard": false, 00:20:09.468 "ctrlr_loss_timeout_sec": 0, 00:20:09.468 "reconnect_delay_sec": 0, 00:20:09.468 "fast_io_fail_timeout_sec": 0, 00:20:09.468 "psk": "key0", 00:20:09.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.468 "hdgst": false, 00:20:09.468 "ddgst": false 00:20:09.468 } 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "method": "bdev_nvme_set_hotplug", 00:20:09.468 "params": { 00:20:09.468 "period_us": 100000, 00:20:09.468 "enable": false 00:20:09.468 } 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "method": "bdev_enable_histogram", 00:20:09.468 "params": { 00:20:09.468 "name": "nvme0n1", 00:20:09.468 "enable": true 00:20:09.468 } 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "method": "bdev_wait_for_examine" 00:20:09.468 } 00:20:09.468 ] 00:20:09.468 }, 00:20:09.468 { 00:20:09.468 "subsystem": "nbd", 00:20:09.468 "config": [] 00:20:09.468 } 00:20:09.468 ] 00:20:09.468 }' 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2723930 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2723930 ']' 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2723930 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2723930 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2723930' 00:20:09.468 killing process with pid 2723930 00:20:09.468 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2723930 00:20:09.468 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.468 00:20:09.469 Latency(us) 00:20:09.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.469 
=================================================================================================================== 00:20:09.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.469 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2723930 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2723696 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2723696 ']' 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2723696 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2723696 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2723696' 00:20:09.728 killing process with pid 2723696 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2723696 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2723696 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:09.728 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.987 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:09.987 "subsystems": [ 00:20:09.987 { 00:20:09.987 "subsystem": "keyring", 00:20:09.987 "config": [ 00:20:09.987 { 00:20:09.987 "method": "keyring_file_add_key", 00:20:09.987 "params": { 00:20:09.987 "name": "key0", 00:20:09.987 "path": "/tmp/tmp.2gPA53PIuq" 00:20:09.987 } 00:20:09.987 } 00:20:09.987 ] 00:20:09.987 }, 00:20:09.987 { 00:20:09.987 "subsystem": "iobuf", 00:20:09.987 "config": [ 00:20:09.987 { 00:20:09.987 "method": "iobuf_set_options", 00:20:09.987 "params": { 00:20:09.987 "small_pool_count": 8192, 00:20:09.987 "large_pool_count": 1024, 00:20:09.987 "small_bufsize": 8192, 00:20:09.987 "large_bufsize": 135168 00:20:09.987 } 00:20:09.987 } 00:20:09.987 ] 00:20:09.987 }, 00:20:09.987 { 00:20:09.987 "subsystem": "sock", 00:20:09.987 "config": [ 00:20:09.987 { 00:20:09.987 "method": "sock_set_default_impl", 00:20:09.987 "params": { 00:20:09.987 "impl_name": "posix" 00:20:09.987 } 00:20:09.987 }, 00:20:09.987 { 00:20:09.987 "method": "sock_impl_set_options", 00:20:09.987 "params": { 00:20:09.987 "impl_name": "ssl", 00:20:09.987 "recv_buf_size": 4096, 00:20:09.987 "send_buf_size": 4096, 00:20:09.987 "enable_recv_pipe": true, 00:20:09.987 "enable_quickack": false, 00:20:09.987 "enable_placement_id": 0, 00:20:09.987 "enable_zerocopy_send_server": true, 00:20:09.987 "enable_zerocopy_send_client": false, 00:20:09.987 "zerocopy_threshold": 0, 00:20:09.987 "tls_version": 0, 00:20:09.987 "enable_ktls": false 00:20:09.987 } 00:20:09.987 }, 00:20:09.987 { 00:20:09.987 "method": "sock_impl_set_options", 00:20:09.987 "params": { 00:20:09.987 "impl_name": "posix", 00:20:09.987 "recv_buf_size": 2097152, 
00:20:09.987 "send_buf_size": 2097152, 00:20:09.987 "enable_recv_pipe": true, 00:20:09.987 "enable_quickack": false, 00:20:09.987 "enable_placement_id": 0, 00:20:09.987 "enable_zerocopy_send_server": true, 00:20:09.987 "enable_zerocopy_send_client": false, 00:20:09.987 "zerocopy_threshold": 0, 00:20:09.987 "tls_version": 0, 00:20:09.987 "enable_ktls": false 00:20:09.987 } 00:20:09.987 } 00:20:09.987 ] 00:20:09.987 }, 00:20:09.987 { 00:20:09.987 "subsystem": "vmd", 00:20:09.988 "config": [] 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "subsystem": "accel", 00:20:09.988 "config": [ 00:20:09.988 { 00:20:09.988 "method": "accel_set_options", 00:20:09.988 "params": { 00:20:09.988 "small_cache_size": 128, 00:20:09.988 "large_cache_size": 16, 00:20:09.988 "task_count": 2048, 00:20:09.988 "sequence_count": 2048, 00:20:09.988 "buf_count": 2048 00:20:09.988 } 00:20:09.988 } 00:20:09.988 ] 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "subsystem": "bdev", 00:20:09.988 "config": [ 00:20:09.988 { 00:20:09.988 "method": "bdev_set_options", 00:20:09.988 "params": { 00:20:09.988 "bdev_io_pool_size": 65535, 00:20:09.988 "bdev_io_cache_size": 256, 00:20:09.988 "bdev_auto_examine": true, 00:20:09.988 "iobuf_small_cache_size": 128, 00:20:09.988 "iobuf_large_cache_size": 16 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "bdev_raid_set_options", 00:20:09.988 "params": { 00:20:09.988 "process_window_size_kb": 1024, 00:20:09.988 "process_max_bandwidth_mb_sec": 0 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "bdev_iscsi_set_options", 00:20:09.988 "params": { 00:20:09.988 "timeout_sec": 30 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "bdev_nvme_set_options", 00:20:09.988 "params": { 00:20:09.988 "action_on_timeout": "none", 00:20:09.988 "timeout_us": 0, 00:20:09.988 "timeout_admin_us": 0, 00:20:09.988 "keep_alive_timeout_ms": 10000, 00:20:09.988 "arbitration_burst": 0, 00:20:09.988 "low_priority_weight": 0, 00:20:09.988 "medium_priority_weight": 0, 00:20:09.988 "high_priority_weight": 0, 00:20:09.988 "nvme_adminq_poll_period_us": 10000, 00:20:09.988 "nvme_ioq_poll_period_us": 0, 00:20:09.988 "io_queue_requests": 0, 00:20:09.988 "delay_cmd_submit": true, 00:20:09.988 "transport_retry_count": 4, 00:20:09.988 "bdev_retry_count": 3, 00:20:09.988 "transport_ack_timeout": 0, 00:20:09.988 "ctrlr_loss_timeout_sec": 0, 00:20:09.988 "reconnect_delay_sec": 0, 00:20:09.988 "fast_io_fail_timeout_sec": 0, 00:20:09.988 "disable_auto_failback": false, 00:20:09.988 "generate_uuids": false, 00:20:09.988 "transport_tos": 0, 00:20:09.988 "nvme_error_stat": false, 00:20:09.988 "rdma_srq_size": 0, 00:20:09.988 "io_path_stat": false, 00:20:09.988 "allow_accel_sequence": false, 00:20:09.988 "rdma_max_cq_size": 0, 00:20:09.988 "rdma_cm_event_timeout_ms": 0, 00:20:09.988 "dhchap_digests": [ 00:20:09.988 "sha256", 00:20:09.988 "sha384", 00:20:09.988 "sha512" 00:20:09.988 ], 00:20:09.988 "dhchap_dhgroups": [ 00:20:09.988 "null", 00:20:09.988 "ffdhe2048", 00:20:09.988 "ffdhe3072", 00:20:09.988 "ffdhe4096", 00:20:09.988 "ffdhe6144", 00:20:09.988 "ffdhe8192" 00:20:09.988 ] 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "bdev_nvme_set_hotplug", 00:20:09.988 "params": { 00:20:09.988 "period_us": 100000, 00:20:09.988 "enable": false 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "bdev_malloc_create", 00:20:09.988 "params": { 00:20:09.988 "name": "malloc0", 00:20:09.988 "num_blocks": 8192, 00:20:09.988 "block_size": 4096, 00:20:09.988 
"physical_block_size": 4096, 00:20:09.988 "uuid": "e3c5df40-21ce-440a-820a-3e1bf61bc6cf", 00:20:09.988 "optimal_io_boundary": 0, 00:20:09.988 "md_size": 0, 00:20:09.988 "dif_type": 0, 00:20:09.988 "dif_is_head_of_md": false, 00:20:09.988 "dif_pi_format": 0 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "bdev_wait_for_examine" 00:20:09.988 } 00:20:09.988 ] 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "subsystem": "nbd", 00:20:09.988 "config": [] 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "subsystem": "scheduler", 00:20:09.988 "config": [ 00:20:09.988 { 00:20:09.988 "method": "framework_set_scheduler", 00:20:09.988 "params": { 00:20:09.988 "name": "static" 00:20:09.988 } 00:20:09.988 } 00:20:09.988 ] 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "subsystem": "nvmf", 00:20:09.988 "config": [ 00:20:09.988 { 00:20:09.988 "method": "nvmf_set_config", 00:20:09.988 "params": { 00:20:09.988 "discovery_filter": "match_any", 00:20:09.988 "admin_cmd_passthru": { 00:20:09.988 "identify_ctrlr": false 00:20:09.988 } 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "nvmf_set_max_subsystems", 00:20:09.988 "params": { 00:20:09.988 "max_subsystems": 1024 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "nvmf_set_crdt", 00:20:09.988 "params": { 00:20:09.988 "crdt1": 0, 00:20:09.988 "crdt2": 0, 00:20:09.988 "crdt3": 0 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "nvmf_create_transport", 00:20:09.988 "params": { 00:20:09.988 "trtype": "TCP", 00:20:09.988 "max_queue_depth": 128, 00:20:09.988 "max_io_qpairs_per_ctrlr": 127, 00:20:09.988 "in_capsule_data_size": 4096, 00:20:09.988 "max_io_size": 131072, 00:20:09.988 "io_unit_size": 131072, 00:20:09.988 "max_aq_depth": 128, 00:20:09.988 "num_shared_buffers": 511, 00:20:09.988 "buf_cache_size": 4294967295, 00:20:09.988 "dif_insert_or_strip": false, 00:20:09.988 "zcopy": false, 00:20:09.988 "c2h_success": false, 00:20:09.988 "sock_priority": 0, 00:20:09.988 "abort_timeout_sec": 1, 00:20:09.988 " 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:09.988 ack_timeout": 0, 00:20:09.988 "data_wr_pool_size": 0 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "nvmf_create_subsystem", 00:20:09.988 "params": { 00:20:09.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.988 "allow_any_host": false, 00:20:09.988 "serial_number": "00000000000000000000", 00:20:09.988 "model_number": "SPDK bdev Controller", 00:20:09.988 "max_namespaces": 32, 00:20:09.988 "min_cntlid": 1, 00:20:09.988 "max_cntlid": 65519, 00:20:09.988 "ana_reporting": false 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "nvmf_subsystem_add_host", 00:20:09.988 "params": { 00:20:09.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.988 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.988 "psk": "key0" 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "nvmf_subsystem_add_ns", 00:20:09.988 "params": { 00:20:09.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.988 "namespace": { 00:20:09.988 "nsid": 1, 00:20:09.988 "bdev_name": "malloc0", 00:20:09.988 "nguid": "E3C5DF4021CE440A820A3E1BF61BC6CF", 00:20:09.988 "uuid": "e3c5df40-21ce-440a-820a-3e1bf61bc6cf", 00:20:09.988 "no_auto_visible": false 00:20:09.988 } 00:20:09.988 } 00:20:09.988 }, 00:20:09.988 { 00:20:09.988 "method": "nvmf_subsystem_add_listener", 00:20:09.988 "params": { 00:20:09.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.988 "listen_address": { 00:20:09.988 
"trtype": "TCP", 00:20:09.988 "adrfam": "IPv4", 00:20:09.988 "traddr": "10.0.0.2", 00:20:09.988 "trsvcid": "4420" 00:20:09.988 }, 00:20:09.988 "secure_channel": false, 00:20:09.988 "sock_impl": "ssl" 00:20:09.988 } 00:20:09.988 } 00:20:09.988 ] 00:20:09.988 } 00:20:09.988 ] 00:20:09.988 }' 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2724517 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2724517 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2724517 ']' 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.988 22:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.988 [2024-07-24 22:07:49.000203] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:20:09.988 [2024-07-24 22:07:49.000253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.988 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.988 [2024-07-24 22:07:49.071943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.988 [2024-07-24 22:07:49.144506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.988 [2024-07-24 22:07:49.144543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.988 [2024-07-24 22:07:49.144552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.989 [2024-07-24 22:07:49.144580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.989 [2024-07-24 22:07:49.144587] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:09.989 [2024-07-24 22:07:49.144638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.248 [2024-07-24 22:07:49.355306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.248 [2024-07-24 22:07:49.397924] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.248 [2024-07-24 22:07:49.398110] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2724550 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2724550 /var/tmp/bdevperf.sock 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2724550 ']' 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.876 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:10.876 "subsystems": [ 00:20:10.876 { 00:20:10.876 "subsystem": "keyring", 00:20:10.876 "config": [ 00:20:10.876 { 00:20:10.876 "method": "keyring_file_add_key", 00:20:10.876 "params": { 00:20:10.876 "name": "key0", 00:20:10.876 "path": "/tmp/tmp.2gPA53PIuq" 00:20:10.876 } 00:20:10.876 } 00:20:10.876 ] 00:20:10.876 }, 00:20:10.876 { 00:20:10.876 "subsystem": "iobuf", 00:20:10.876 "config": [ 00:20:10.876 { 00:20:10.876 "method": "iobuf_set_options", 00:20:10.876 "params": { 00:20:10.876 "small_pool_count": 8192, 00:20:10.876 "large_pool_count": 1024, 00:20:10.876 "small_bufsize": 8192, 00:20:10.876 "large_bufsize": 135168 00:20:10.876 } 00:20:10.876 } 00:20:10.876 ] 00:20:10.876 }, 00:20:10.876 { 00:20:10.876 "subsystem": "sock", 00:20:10.876 "config": [ 00:20:10.876 { 00:20:10.876 "method": "sock_set_default_impl", 00:20:10.876 "params": { 00:20:10.876 "impl_name": "posix" 00:20:10.876 } 00:20:10.876 }, 00:20:10.876 { 00:20:10.876 "method": "sock_impl_set_options", 00:20:10.876 "params": { 00:20:10.876 "impl_name": "ssl", 00:20:10.876 "recv_buf_size": 4096, 00:20:10.876 "send_buf_size": 4096, 00:20:10.876 "enable_recv_pipe": true, 00:20:10.876 "enable_quickack": false, 00:20:10.876 "enable_placement_id": 0, 00:20:10.876 "enable_zerocopy_send_server": true, 00:20:10.876 "enable_zerocopy_send_client": false, 00:20:10.876 "zerocopy_threshold": 0, 00:20:10.876 "tls_version": 0, 00:20:10.876 "enable_ktls": false 00:20:10.876 } 00:20:10.876 }, 00:20:10.876 { 00:20:10.876 "method": "sock_impl_set_options", 00:20:10.876 "params": { 00:20:10.876 "impl_name": "posix", 00:20:10.876 "recv_buf_size": 2097152, 00:20:10.876 "send_buf_size": 2097152, 00:20:10.876 "enable_recv_pipe": true, 00:20:10.876 "enable_quickack": false, 00:20:10.876 "enable_placement_id": 0, 00:20:10.876 "enable_zerocopy_send_server": true, 00:20:10.876 "enable_zerocopy_send_client": false, 00:20:10.876 "zerocopy_threshold": 0, 00:20:10.876 "tls_version": 0, 00:20:10.876 "enable_ktls": false 00:20:10.876 } 00:20:10.876 } 00:20:10.876 ] 00:20:10.876 }, 00:20:10.876 { 00:20:10.876 "subsystem": "vmd", 00:20:10.876 "config": [] 00:20:10.876 }, 00:20:10.876 { 00:20:10.876 "subsystem": "accel", 00:20:10.876 "config": [ 00:20:10.876 { 00:20:10.876 "method": "accel_set_options", 00:20:10.876 "params": { 00:20:10.876 "small_cache_size": 128, 00:20:10.876 "large_cache_size": 16, 00:20:10.876 "task_count": 2048, 00:20:10.876 "sequence_count": 2048, 00:20:10.876 "buf_count": 2048 00:20:10.877 } 00:20:10.877 } 00:20:10.877 ] 00:20:10.877 }, 00:20:10.877 { 00:20:10.877 "subsystem": "bdev", 00:20:10.877 "config": [ 00:20:10.877 { 00:20:10.877 "method": "bdev_set_options", 00:20:10.877 "params": { 00:20:10.877 "bdev_io_pool_size": 65535, 00:20:10.877 "bdev_io_cache_size": 256, 00:20:10.877 "bdev_auto_examine": true, 00:20:10.877 "iobuf_small_cache_size": 128, 00:20:10.877 "iobuf_large_cache_size": 16 00:20:10.877 } 00:20:10.877 }, 00:20:10.877 { 00:20:10.877 "method": "bdev_raid_set_options", 00:20:10.877 "params": { 00:20:10.877 "process_window_size_kb": 1024, 00:20:10.877 "process_max_bandwidth_mb_sec": 0 00:20:10.877 } 00:20:10.877 }, 00:20:10.877 { 00:20:10.877 "method": "bdev_iscsi_set_options", 00:20:10.877 "params": { 00:20:10.877 "timeout_sec": 30 00:20:10.877 } 00:20:10.877 }, 00:20:10.877 { 00:20:10.877 "method": 
"bdev_nvme_set_options", 00:20:10.877 "params": { 00:20:10.877 "action_on_timeout": "none", 00:20:10.877 "timeout_us": 0, 00:20:10.877 "timeout_admin_us": 0, 00:20:10.877 "keep_alive_timeout_ms": 10000, 00:20:10.877 "arbitration_burst": 0, 00:20:10.877 "low_priority_weight": 0, 00:20:10.877 "medium_priority_weight": 0, 00:20:10.877 "high_priority_weight": 0, 00:20:10.877 "nvme_adminq_poll_period_us": 10000, 00:20:10.877 "nvme_ioq_poll_period_us": 0, 00:20:10.877 "io_queue_requests": 512, 00:20:10.877 "delay_cmd_submit": true, 00:20:10.877 "transport_retry_count": 4, 00:20:10.877 "bdev_retry_count": 3, 00:20:10.877 "transport_ack_timeout": 0, 00:20:10.877 "ctrlr_loss_timeout_sec": 0, 00:20:10.877 "reconnect_delay_sec": 0, 00:20:10.877 "fast_io_fail_timeout_sec": 0, 00:20:10.877 "disable_auto_failback": false, 00:20:10.877 "generate_uuids": false, 00:20:10.877 "transport_tos": 0, 00:20:10.877 "nvme_error_stat": false, 00:20:10.877 "rdma_srq_size": 0, 00:20:10.877 "io_path_stat": false, 00:20:10.877 "allow_accel_sequence": false, 00:20:10.877 "rdma_max_cq_size": 0, 00:20:10.877 "rdma_cm_event_timeout_ms": 0, 00:20:10.877 "dhchap_digests": [ 00:20:10.877 "sha256", 00:20:10.877 "sha384", 00:20:10.877 "sha512" 00:20:10.877 ], 00:20:10.877 "dhchap_dhgroups": [ 00:20:10.877 "null", 00:20:10.877 "ffdhe2048", 00:20:10.877 "ffdhe3072", 00:20:10.877 "ffdhe4096", 00:20:10.877 "ffdhe6144", 00:20:10.877 "ffdhe8192" 00:20:10.877 ] 00:20:10.877 } 00:20:10.877 }, 00:20:10.877 { 00:20:10.877 "method": "bdev_nvme_attach_controller", 00:20:10.877 "params": { 00:20:10.877 "name": "nvme0", 00:20:10.877 "trtype": "TCP", 00:20:10.877 "adrfam": "IPv4", 00:20:10.877 "traddr": "10.0.0.2", 00:20:10.877 "trsvcid": "4420", 00:20:10.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.877 "prchk_reftag": false, 00:20:10.877 "prchk_guard": false, 00:20:10.877 "ctrlr_loss_timeout_sec": 0, 00:20:10.877 "reconnect_delay_sec": 0, 00:20:10.877 "fast_io_fail_timeout_sec": 0, 00:20:10.877 "psk": "key0", 00:20:10.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.877 "hdgst": false, 00:20:10.877 "ddgst": false 00:20:10.877 } 00:20:10.877 }, 00:20:10.877 { 00:20:10.877 "method": "bdev_nvme_set_hotplug", 00:20:10.877 "params": { 00:20:10.877 "period_us": 100000, 00:20:10.877 "enable": false 00:20:10.877 } 00:20:10.877 }, 00:20:10.877 { 00:20:10.877 "method": "bdev_enable_histogram", 00:20:10.877 "params": { 00:20:10.877 "name": "nvme0n1", 00:20:10.877 "enable": true 00:20:10.877 } 00:20:10.877 }, 00:20:10.877 { 00:20:10.877 "method": "bdev_wait_for_examine" 00:20:10.877 } 00:20:10.877 ] 00:20:10.877 }, 00:20:10.877 { 00:20:10.877 "subsystem": "nbd", 00:20:10.877 "config": [] 00:20:10.877 } 00:20:10.877 ] 00:20:10.877 }' 00:20:10.877 22:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.877 [2024-07-24 22:07:49.887191] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:20:10.877 [2024-07-24 22:07:49.887241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724550 ] 00:20:10.877 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.877 [2024-07-24 22:07:49.956917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.877 [2024-07-24 22:07:50.028870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.136 [2024-07-24 22:07:50.178429] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.704 22:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.704 22:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:11.704 22:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:11.704 22:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:11.704 22:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.704 22:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.963 Running I/O for 1 seconds... 00:20:12.901 00:20:12.901 Latency(us) 00:20:12.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.901 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:12.901 Verification LBA range: start 0x0 length 0x2000 00:20:12.901 nvme0n1 : 1.03 4632.41 18.10 0.00 0.00 27300.62 4744.81 57881.40 00:20:12.901 =================================================================================================================== 00:20:12.901 Total : 4632.41 18.10 0.00 0.00 27300.62 4744.81 57881.40 00:20:12.901 0 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:12.901 22:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:12.901 nvmf_trace.0 00:20:12.901 22:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:12.901 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2724550 00:20:12.901 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2724550 ']' 00:20:12.901 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2724550 00:20:12.901 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:12.901 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.901 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2724550 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2724550' 00:20:13.160 killing process with pid 2724550 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2724550 00:20:13.160 Received shutdown signal, test time was about 1.000000 seconds 00:20:13.160 00:20:13.160 Latency(us) 00:20:13.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.160 =================================================================================================================== 00:20:13.160 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2724550 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.160 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.160 rmmod nvme_tcp 00:20:13.160 rmmod nvme_fabrics 00:20:13.161 rmmod nvme_keyring 00:20:13.161 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.161 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:13.161 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:13.161 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2724517 ']' 00:20:13.161 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2724517 00:20:13.161 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2724517 ']' 00:20:13.161 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2724517 00:20:13.161 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:13.161 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.161 22:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2724517 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2724517' 00:20:13.420 killing process with pid 2724517 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2724517 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2724517 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.420 22:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.iGB6Arf8r8 /tmp/tmp.K5LIWACxIb /tmp/tmp.2gPA53PIuq 00:20:15.967 00:20:15.967 real 1m26.124s 00:20:15.967 user 2m5.741s 00:20:15.967 sys 0m35.318s 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.967 ************************************ 00:20:15.967 END TEST nvmf_tls 00:20:15.967 ************************************ 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:15.967 ************************************ 00:20:15.967 START TEST nvmf_fips 00:20:15.967 ************************************ 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.967 * Looking for test storage... 
00:20:15.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:15.967 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:15.968 22:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:15.968 Error setting digest 00:20:15.968 001250CB017F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:15.968 001250CB017F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.968 22:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:22.535 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 
00:20:22.535 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:22.535 Found net devices under 0000:af:00.0: cvl_0_0 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:22.535 Found net devices under 0000:af:00.1: cvl_0_1 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:22.535 
22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.535 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:22.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:20:22.536 00:20:22.536 --- 10.0.0.2 ping statistics --- 00:20:22.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.536 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:20:22.536 00:20:22.536 --- 10.0.0.1 ping statistics --- 00:20:22.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.536 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2728769 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2728769 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2728769 ']' 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.536 22:08:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.536 [2024-07-24 22:08:01.603020] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:20:22.536 [2024-07-24 22:08:01.603070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.536 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.536 [2024-07-24 22:08:01.673274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.536 [2024-07-24 22:08:01.740457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.536 [2024-07-24 22:08:01.740496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.536 [2024-07-24 22:08:01.740506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.536 [2024-07-24 22:08:01.740515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.536 [2024-07-24 22:08:01.740522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.536 [2024-07-24 22:08:01.740545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.472 [2024-07-24 22:08:02.586728] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.472 [2024-07-24 22:08:02.602733] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.472 [2024-07-24 22:08:02.602924] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.472 
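The target application is then launched inside that namespace and the harness blocks until its RPC socket is up. Roughly, based on the nvmfappstart/waitforlisten calls traced above, with paths shortened to the SPDK tree (the backgrounding and pid capture are assumptions about what those helpers do internally):

NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
"${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # autotest helper: waits until /var/tmp/spdk.sock accepts RPCs
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT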
[2024-07-24 22:08:02.631139] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:23.472 malloc0 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2728845 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2728845 /var/tmp/bdevperf.sock 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2728845 ']' 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.472 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.473 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.473 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.473 22:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.732 [2024-07-24 22:08:02.713210] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:20:23.732 [2024-07-24 22:08:02.713264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728845 ] 00:20:23.732 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.732 [2024-07-24 22:08:02.779167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.732 [2024-07-24 22:08:02.852451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.304 22:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.304 22:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:24.304 22:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:24.564 [2024-07-24 22:08:03.638123] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.564 [2024-07-24 22:08:03.638217] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:24.564 TLSTESTn1 00:20:24.564 22:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:24.823 Running I/O for 10 seconds... 
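The FIPS case exercises TLS through a PSK interchange key written to a mode-0600 file, a second SPDK app (bdevperf) started in wait-for-RPC mode (-z), and a TLS-enabled controller attach against the listener on 10.0.0.2:4420. Condensed from the commands traced above, with paths shortened relative to the SPDK tree; the setup_nvmf_tgt_conf RPCs that register the key on the target side are not expanded in the trace and are elided here as well.

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > test/nvmf/fips/key.txt
chmod 0600 test/nvmf/fips/key.txt

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/fips/key.txt
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests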
00:20:34.803 00:20:34.803 Latency(us) 00:20:34.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.803 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.803 Verification LBA range: start 0x0 length 0x2000 00:20:34.803 TLSTESTn1 : 10.03 4971.76 19.42 0.00 0.00 25695.28 4718.59 47815.07 00:20:34.803 =================================================================================================================== 00:20:34.803 Total : 4971.76 19.42 0.00 0.00 25695.28 4718.59 47815.07 00:20:34.803 0 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:34.803 nvmf_trace.0 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2728845 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2728845 ']' 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2728845 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:34.803 22:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2728845 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2728845' 00:20:35.062 killing process with pid 2728845 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2728845 00:20:35.062 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.062 00:20:35.062 Latency(us) 00:20:35.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.062 =================================================================================================================== 00:20:35.062 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.062 
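The cleanup that follows archives the target's trace shared memory before the processes are stopped; schematically, using the pids from this run (the output directory is the jenkins ../output path shown in the tar command above):

tar -C /dev/shm/ -cvzf ../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0   # keep nvmf_trace.0 for offline spdk_trace analysis
kill 2728845 && wait 2728845                                            # stop bdevperf
# nvmftestfini then unloads nvme-tcp/nvme-fabrics/nvme-keyring and kills the target (pid 2728769)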
[2024-07-24 22:08:14.029038] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2728845 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.062 rmmod nvme_tcp 00:20:35.062 rmmod nvme_fabrics 00:20:35.062 rmmod nvme_keyring 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2728769 ']' 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2728769 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2728769 ']' 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2728769 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:35.062 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2728769 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2728769' 00:20:35.321 killing process with pid 2728769 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2728769 00:20:35.321 [2024-07-24 22:08:14.321994] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2728769 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.321 22:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.321 22:08:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:37.898 00:20:37.898 real 0m21.824s 00:20:37.898 user 0m21.447s 00:20:37.898 sys 0m11.058s 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:37.898 ************************************ 00:20:37.898 END TEST nvmf_fips 00:20:37.898 ************************************ 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.898 22:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.469 
22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:44.469 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:44.469 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:44.469 Found net devices under 0000:af:00.0: cvl_0_0 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.469 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:44.470 Found net devices under 0000:af:00.1: cvl_0_1 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.470 ************************************ 00:20:44.470 START TEST nvmf_perf_adq 00:20:44.470 ************************************ 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:44.470 * Looking for test storage... 
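Before the perf_adq run, gather_supported_nvmf_pci_devs scans the PCI bus for supported NICs (Intel E810/X722 plus a list of Mellanox device IDs) and resolves each matched function to its kernel net device through sysfs. The core of that loop, condensed from the trace above (pci_bus_cache is the helper's PCI-ID lookup table; this host matched 0x8086:0x159b on both ports):

e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})            # both ports on this host
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do                      # 0000:af:00.0 and 0000:af:00.1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # interfaces bound to this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")          # basename only: cvl_0_0, cvl_0_1
    net_devs+=("${pci_net_devs[@]}")
done
TCP_INTERFACE_LIST=("${net_devs[@]}")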
00:20:44.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.470 22:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:44.470 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:51.043 22:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:51.043 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:51.043 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:51.043 Found net devices under 0000:af:00.0: cvl_0_0 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:51.043 Found net devices under 0000:af:00.1: cvl_0_1 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:51.043 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:52.422 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:54.329 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:59.607 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:59.607 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.607 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:59.608 Found net devices under 0000:af:00.0: cvl_0_0 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.608 22:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:59.608 Found net devices under 0000:af:00.1: cvl_0_1 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:59.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:20:59.608 00:20:59.608 --- 10.0.0.2 ping statistics --- 00:20:59.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.608 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:20:59.608 00:20:59.608 --- 10.0.0.1 ping statistics --- 00:20:59.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.608 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2739227 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2739227 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2739227 ']' 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:59.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.608 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.608 [2024-07-24 22:08:38.778033] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:20:59.608 [2024-07-24 22:08:38.778082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.608 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.868 [2024-07-24 22:08:38.852080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.868 [2024-07-24 22:08:38.927537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.868 [2024-07-24 22:08:38.927575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.868 [2024-07-24 22:08:38.927584] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.868 [2024-07-24 22:08:38.927593] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.868 [2024-07-24 22:08:38.927600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.868 [2024-07-24 22:08:38.927647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.868 [2024-07-24 22:08:38.927763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.868 [2024-07-24 22:08:38.927789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.868 [2024-07-24 22:08:38.927793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.437 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 [2024-07-24 22:08:39.778226] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 Malloc1 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 [2024-07-24 22:08:39.824845] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2739514 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:00.696 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:00.696 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.232 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:03.232 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.232 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.232 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.232 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:03.232 "tick_rate": 2500000000, 00:21:03.233 "poll_groups": [ 00:21:03.233 { 00:21:03.233 "name": "nvmf_tgt_poll_group_000", 00:21:03.233 "admin_qpairs": 1, 00:21:03.233 "io_qpairs": 1, 00:21:03.233 "current_admin_qpairs": 1, 00:21:03.233 "current_io_qpairs": 1, 00:21:03.233 "pending_bdev_io": 0, 00:21:03.233 "completed_nvme_io": 20405, 00:21:03.233 "transports": [ 00:21:03.233 { 00:21:03.233 "trtype": "TCP" 00:21:03.233 } 00:21:03.233 ] 00:21:03.233 }, 00:21:03.233 { 00:21:03.233 "name": "nvmf_tgt_poll_group_001", 00:21:03.233 "admin_qpairs": 0, 00:21:03.233 "io_qpairs": 1, 00:21:03.233 "current_admin_qpairs": 0, 00:21:03.233 "current_io_qpairs": 1, 00:21:03.233 "pending_bdev_io": 0, 00:21:03.233 "completed_nvme_io": 20520, 00:21:03.233 "transports": [ 00:21:03.233 { 00:21:03.233 "trtype": "TCP" 00:21:03.233 } 00:21:03.233 ] 00:21:03.233 }, 00:21:03.233 { 00:21:03.233 "name": "nvmf_tgt_poll_group_002", 00:21:03.233 "admin_qpairs": 0, 00:21:03.233 "io_qpairs": 1, 00:21:03.233 "current_admin_qpairs": 0, 00:21:03.233 "current_io_qpairs": 1, 00:21:03.233 "pending_bdev_io": 0, 00:21:03.233 "completed_nvme_io": 20723, 00:21:03.233 "transports": [ 00:21:03.233 { 00:21:03.233 "trtype": "TCP" 00:21:03.233 } 00:21:03.233 ] 00:21:03.233 }, 00:21:03.233 { 00:21:03.233 "name": "nvmf_tgt_poll_group_003", 00:21:03.233 "admin_qpairs": 0, 00:21:03.233 "io_qpairs": 1, 00:21:03.233 "current_admin_qpairs": 0, 00:21:03.233 "current_io_qpairs": 1, 00:21:03.233 "pending_bdev_io": 0, 00:21:03.233 "completed_nvme_io": 20387, 00:21:03.233 "transports": [ 00:21:03.233 { 00:21:03.233 "trtype": "TCP" 00:21:03.233 } 00:21:03.233 ] 00:21:03.233 } 00:21:03.233 ] 00:21:03.233 }' 00:21:03.233 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:03.233 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:03.233 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:03.233 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:03.233 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 2739514 00:21:11.424 Initializing NVMe Controllers 00:21:11.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:11.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:11.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:11.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:11.424 Initialization complete. Launching workers. 00:21:11.424 ======================================================== 00:21:11.424 Latency(us) 00:21:11.424 Device Information : IOPS MiB/s Average min max 00:21:11.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10542.30 41.18 6090.81 2013.73 46036.62 00:21:11.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10580.10 41.33 6048.55 2108.55 10414.53 00:21:11.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10795.80 42.17 5928.77 2152.00 10316.10 00:21:11.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10625.10 41.50 6023.37 1481.08 10548.84 00:21:11.425 ======================================================== 00:21:11.425 Total : 42543.30 166.18 6022.34 1481.08 46036.62 00:21:11.425 00:21:11.425 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:11.425 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.425 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:11.425 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.425 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:11.425 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.425 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.425 rmmod nvme_tcp 00:21:11.425 rmmod nvme_fabrics 00:21:11.425 rmmod nvme_keyring 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2739227 ']' 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2739227 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2739227 ']' 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2739227 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2739227 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.425 22:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2739227' 00:21:11.425 killing process with pid 2739227 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2739227 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2739227 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.425 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.330 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:13.330 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:13.330 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:14.710 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:17.243 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:22.516 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.516 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:22.517 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:22.517 Found net devices under 0000:af:00.0: cvl_0_0 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:22.517 Found net devices under 0000:af:00.1: cvl_0_1 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.517 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:22.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:21:22.517 00:21:22.517 --- 10.0.0.2 ping statistics --- 00:21:22.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.517 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:21:22.517 00:21:22.517 --- 10.0.0.1 ping statistics --- 00:21:22.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.517 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.517 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:22.518 net.core.busy_poll = 1 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:22.518 net.core.busy_read = 1 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:22.518 
22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2743674 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2743674 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2743674 ']' 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.518 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.518 [2024-07-24 22:09:01.655121] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:21:22.518 [2024-07-24 22:09:01.655172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.518 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.778 [2024-07-24 22:09:01.730049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.778 [2024-07-24 22:09:01.805813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.778 [2024-07-24 22:09:01.805852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.778 [2024-07-24 22:09:01.805862] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.778 [2024-07-24 22:09:01.805870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.778 [2024-07-24 22:09:01.805878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
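The adq_configure_driver steps traced just above reduce to the host-side setup below. This is a condensed sketch of the logged commands, assuming cvl_0_0 sits in the cvl_0_0_ns_spdk namespace created earlier and abbreviating the workspace path to the set_xps_rxqs helper:

    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload cvl_0_0 hw-tc-offload on                 # let the ice NIC offload tc filters
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                                 # busy-poll sockets instead of sleeping in epoll
    sysctl -w net.core.busy_read=1
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP (port 4420) into TC 1
    $NS ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0                   # align XPS with the ADQ receive queues

The mqprio line carves the NIC channels into two traffic classes of two queues each; the flower filter then pins all traffic destined for the NVMe/TCP listener to the second class, which is what keeps the ADQ traffic on its own queue set.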
00:21:22.778 [2024-07-24 22:09:01.805924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.778 [2024-07-24 22:09:01.805969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.778 [2024-07-24 22:09:01.805942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.778 [2024-07-24 22:09:01.805968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.346 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.606 [2024-07-24 22:09:02.654647] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.606 Malloc1 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.606 [2024-07-24 22:09:02.705042] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2743767 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:23.606 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:23.606 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.513 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:25.513 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.513 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.772 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.772 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:25.772 "tick_rate": 2500000000, 00:21:25.772 "poll_groups": [ 00:21:25.772 { 00:21:25.772 "name": "nvmf_tgt_poll_group_000", 00:21:25.772 "admin_qpairs": 1, 00:21:25.772 "io_qpairs": 2, 00:21:25.772 "current_admin_qpairs": 1, 00:21:25.772 
"current_io_qpairs": 2, 00:21:25.772 "pending_bdev_io": 0, 00:21:25.772 "completed_nvme_io": 30126, 00:21:25.772 "transports": [ 00:21:25.772 { 00:21:25.772 "trtype": "TCP" 00:21:25.772 } 00:21:25.772 ] 00:21:25.772 }, 00:21:25.772 { 00:21:25.772 "name": "nvmf_tgt_poll_group_001", 00:21:25.772 "admin_qpairs": 0, 00:21:25.772 "io_qpairs": 2, 00:21:25.772 "current_admin_qpairs": 0, 00:21:25.772 "current_io_qpairs": 2, 00:21:25.772 "pending_bdev_io": 0, 00:21:25.772 "completed_nvme_io": 29253, 00:21:25.772 "transports": [ 00:21:25.772 { 00:21:25.772 "trtype": "TCP" 00:21:25.772 } 00:21:25.772 ] 00:21:25.772 }, 00:21:25.772 { 00:21:25.772 "name": "nvmf_tgt_poll_group_002", 00:21:25.772 "admin_qpairs": 0, 00:21:25.772 "io_qpairs": 0, 00:21:25.772 "current_admin_qpairs": 0, 00:21:25.772 "current_io_qpairs": 0, 00:21:25.772 "pending_bdev_io": 0, 00:21:25.772 "completed_nvme_io": 0, 00:21:25.772 "transports": [ 00:21:25.772 { 00:21:25.772 "trtype": "TCP" 00:21:25.772 } 00:21:25.772 ] 00:21:25.772 }, 00:21:25.772 { 00:21:25.772 "name": "nvmf_tgt_poll_group_003", 00:21:25.772 "admin_qpairs": 0, 00:21:25.772 "io_qpairs": 0, 00:21:25.772 "current_admin_qpairs": 0, 00:21:25.772 "current_io_qpairs": 0, 00:21:25.772 "pending_bdev_io": 0, 00:21:25.772 "completed_nvme_io": 0, 00:21:25.772 "transports": [ 00:21:25.772 { 00:21:25.772 "trtype": "TCP" 00:21:25.772 } 00:21:25.772 ] 00:21:25.772 } 00:21:25.772 ] 00:21:25.772 }' 00:21:25.772 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:25.772 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:25.772 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:25.772 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:25.772 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2743767 00:21:33.897 Initializing NVMe Controllers 00:21:33.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:33.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:33.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:33.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:33.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:33.897 Initialization complete. Launching workers. 
00:21:33.897 ======================================================== 00:21:33.897 Latency(us) 00:21:33.897 Device Information : IOPS MiB/s Average min max 00:21:33.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7988.30 31.20 8014.26 1361.60 52357.27 00:21:33.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7456.70 29.13 8582.63 1344.44 52377.05 00:21:33.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7811.50 30.51 8195.26 1420.29 52704.24 00:21:33.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8001.10 31.25 8030.34 1369.27 51901.88 00:21:33.897 ======================================================== 00:21:33.897 Total : 31257.60 122.10 8199.20 1344.44 52704.24 00:21:33.897 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.897 rmmod nvme_tcp 00:21:33.897 rmmod nvme_fabrics 00:21:33.897 rmmod nvme_keyring 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2743674 ']' 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2743674 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2743674 ']' 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2743674 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:33.897 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2743674 00:21:33.897 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:33.897 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:33.897 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2743674' 00:21:33.897 killing process with pid 2743674 00:21:33.897 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2743674 00:21:33.897 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2743674 00:21:34.156 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:34.156 
22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:34.156 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:34.156 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:34.156 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:34.156 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.156 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.156 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:37.519 00:21:37.519 real 0m53.114s 00:21:37.519 user 2m46.762s 00:21:37.519 sys 0m14.100s 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.519 ************************************ 00:21:37.519 END TEST nvmf_perf_adq 00:21:37.519 ************************************ 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:37.519 ************************************ 00:21:37.519 START TEST nvmf_shutdown 00:21:37.519 ************************************ 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:37.519 * Looking for test storage... 
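Before the teardown above, the ADQ pass/fail check (perf_adq.sh@100-101 in the trace) simply counts idle poll groups in the nvmf_get_stats output. Restated as a standalone snippet, again with scripts/rpc.py standing in for rpc_cmd and the error handling shown only for illustration:

    idle=$(scripts/rpc.py nvmf_get_stats \
           | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
           | wc -l)
    # With placement-id 1 and busy_poll on, the four perf connections collapse onto two of the
    # four poll groups (see the stats dump above), so at least two groups should report zero
    # current_io_qpairs.
    if [[ $idle -lt 2 ]]; then
        echo "ADQ socket placement did not take effect" >&2
        exit 1
    fi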
00:21:37.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.519 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.520 22:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:37.520 22:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:37.520 ************************************ 00:21:37.520 START TEST nvmf_shutdown_tc1 00:21:37.520 ************************************ 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.520 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.092 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:44.093 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:44.093 22:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:44.093 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:44.093 Found net devices under 0000:af:00.0: cvl_0_0 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:44.093 Found net devices under 0000:af:00.1: cvl_0_1 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.093 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.352 22:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:21:44.352 00:21:44.352 --- 10.0.0.2 ping statistics --- 00:21:44.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.352 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:21:44.352 00:21:44.352 --- 10.0.0.1 ping statistics --- 00:21:44.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.352 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.352 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.610 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:44.610 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.610 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2749929 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2749929 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2749929 ']' 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.611 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.611 [2024-07-24 22:09:23.626261] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:21:44.611 [2024-07-24 22:09:23.626309] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.611 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.611 [2024-07-24 22:09:23.700600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.611 [2024-07-24 22:09:23.775004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.611 [2024-07-24 22:09:23.775043] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.611 [2024-07-24 22:09:23.775053] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.611 [2024-07-24 22:09:23.775062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.611 [2024-07-24 22:09:23.775085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
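For reference, the launch-and-wait pattern recorded just above (nvmfappstart running nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then waitforlisten polling /var/tmp/spdk.sock) reduces to the minimal sketch below. Only the nvmf_tgt flags, the namespace and the socket path come from the log; the relative paths, the rpc_get_methods probe and the retry count are assumptions about how the wait loop is implemented.

    # Sketch (run from the SPDK tree): start nvmf_tgt in the target namespace,
    # then poll its JSON-RPC socket until it answers a basic method call.
    NS=cvl_0_0_ns_spdk
    RPC_SOCK=/var/tmp/spdk.sock
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # consider the target ready once the RPC socket responds
        ./scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done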
00:21:44.611 [2024-07-24 22:09:23.775184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.611 [2024-07-24 22:09:23.775213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.611 [2024-07-24 22:09:23.775249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.611 [2024-07-24 22:09:23.775250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.547 [2024-07-24 22:09:24.480862] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.547 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.548 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.548 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.548 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.548 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.548 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.548 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.548 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:45.548 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.548 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.548 Malloc1 00:21:45.548 [2024-07-24 22:09:24.595756] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.548 Malloc2 00:21:45.548 Malloc3 00:21:45.548 Malloc4 00:21:45.548 Malloc5 00:21:45.807 Malloc6 00:21:45.807 Malloc7 00:21:45.807 Malloc8 00:21:45.807 Malloc9 00:21:45.807 Malloc10 00:21:45.807 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.807 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:45.807 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:45.807 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2750179 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2750179 /var/tmp/bdevperf.sock 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2750179 ']' 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.067 22:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.067 { 00:21:46.067 "params": { 00:21:46.067 "name": "Nvme$subsystem", 00:21:46.067 "trtype": "$TEST_TRANSPORT", 00:21:46.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.067 "adrfam": "ipv4", 00:21:46.067 "trsvcid": "$NVMF_PORT", 00:21:46.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.067 "hdgst": ${hdgst:-false}, 00:21:46.067 "ddgst": ${ddgst:-false} 00:21:46.067 }, 00:21:46.067 "method": "bdev_nvme_attach_controller" 00:21:46.067 } 00:21:46.067 EOF 00:21:46.067 )") 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.067 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.068 { 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme$subsystem", 00:21:46.068 "trtype": "$TEST_TRANSPORT", 00:21:46.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "$NVMF_PORT", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.068 "hdgst": ${hdgst:-false}, 00:21:46.068 "ddgst": ${ddgst:-false} 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 } 00:21:46.068 EOF 00:21:46.068 )") 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.068 { 00:21:46.068 "params": { 00:21:46.068 "name": 
"Nvme$subsystem", 00:21:46.068 "trtype": "$TEST_TRANSPORT", 00:21:46.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "$NVMF_PORT", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.068 "hdgst": ${hdgst:-false}, 00:21:46.068 "ddgst": ${ddgst:-false} 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 } 00:21:46.068 EOF 00:21:46.068 )") 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.068 { 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme$subsystem", 00:21:46.068 "trtype": "$TEST_TRANSPORT", 00:21:46.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "$NVMF_PORT", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.068 "hdgst": ${hdgst:-false}, 00:21:46.068 "ddgst": ${ddgst:-false} 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 } 00:21:46.068 EOF 00:21:46.068 )") 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.068 { 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme$subsystem", 00:21:46.068 "trtype": "$TEST_TRANSPORT", 00:21:46.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "$NVMF_PORT", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.068 "hdgst": ${hdgst:-false}, 00:21:46.068 "ddgst": ${ddgst:-false} 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 } 00:21:46.068 EOF 00:21:46.068 )") 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.068 { 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme$subsystem", 00:21:46.068 "trtype": "$TEST_TRANSPORT", 00:21:46.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "$NVMF_PORT", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.068 "hdgst": ${hdgst:-false}, 00:21:46.068 "ddgst": ${ddgst:-false} 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 } 00:21:46.068 EOF 00:21:46.068 )") 00:21:46.068 [2024-07-24 22:09:25.073952] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:21:46.068 [2024-07-24 22:09:25.074008] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.068 { 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme$subsystem", 00:21:46.068 "trtype": "$TEST_TRANSPORT", 00:21:46.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "$NVMF_PORT", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.068 "hdgst": ${hdgst:-false}, 00:21:46.068 "ddgst": ${ddgst:-false} 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 } 00:21:46.068 EOF 00:21:46.068 )") 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.068 { 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme$subsystem", 00:21:46.068 "trtype": "$TEST_TRANSPORT", 00:21:46.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "$NVMF_PORT", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.068 "hdgst": ${hdgst:-false}, 00:21:46.068 "ddgst": ${ddgst:-false} 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 } 00:21:46.068 EOF 00:21:46.068 )") 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.068 { 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme$subsystem", 00:21:46.068 "trtype": "$TEST_TRANSPORT", 00:21:46.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "$NVMF_PORT", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.068 "hdgst": ${hdgst:-false}, 00:21:46.068 "ddgst": ${ddgst:-false} 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 } 00:21:46.068 EOF 00:21:46.068 )") 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.068 { 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme$subsystem", 00:21:46.068 
"trtype": "$TEST_TRANSPORT", 00:21:46.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "$NVMF_PORT", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.068 "hdgst": ${hdgst:-false}, 00:21:46.068 "ddgst": ${ddgst:-false} 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 } 00:21:46.068 EOF 00:21:46.068 )") 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:46.068 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:46.068 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme1", 00:21:46.068 "trtype": "tcp", 00:21:46.068 "traddr": "10.0.0.2", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "4420", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.068 "hdgst": false, 00:21:46.068 "ddgst": false 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 },{ 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme2", 00:21:46.068 "trtype": "tcp", 00:21:46.068 "traddr": "10.0.0.2", 00:21:46.068 "adrfam": "ipv4", 00:21:46.068 "trsvcid": "4420", 00:21:46.068 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:46.068 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:46.068 "hdgst": false, 00:21:46.068 "ddgst": false 00:21:46.068 }, 00:21:46.068 "method": "bdev_nvme_attach_controller" 00:21:46.068 },{ 00:21:46.068 "params": { 00:21:46.068 "name": "Nvme3", 00:21:46.068 "trtype": "tcp", 00:21:46.069 "traddr": "10.0.0.2", 00:21:46.069 "adrfam": "ipv4", 00:21:46.069 "trsvcid": "4420", 00:21:46.069 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:46.069 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:46.069 "hdgst": false, 00:21:46.069 "ddgst": false 00:21:46.069 }, 00:21:46.069 "method": "bdev_nvme_attach_controller" 00:21:46.069 },{ 00:21:46.069 "params": { 00:21:46.069 "name": "Nvme4", 00:21:46.069 "trtype": "tcp", 00:21:46.069 "traddr": "10.0.0.2", 00:21:46.069 "adrfam": "ipv4", 00:21:46.069 "trsvcid": "4420", 00:21:46.069 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:46.069 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:46.069 "hdgst": false, 00:21:46.069 "ddgst": false 00:21:46.069 }, 00:21:46.069 "method": "bdev_nvme_attach_controller" 00:21:46.069 },{ 00:21:46.069 "params": { 00:21:46.069 "name": "Nvme5", 00:21:46.069 "trtype": "tcp", 00:21:46.069 "traddr": "10.0.0.2", 00:21:46.069 "adrfam": "ipv4", 00:21:46.069 "trsvcid": "4420", 00:21:46.069 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:46.069 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:46.069 "hdgst": false, 00:21:46.069 "ddgst": false 00:21:46.069 }, 00:21:46.069 "method": "bdev_nvme_attach_controller" 00:21:46.069 },{ 00:21:46.069 "params": { 00:21:46.069 "name": "Nvme6", 00:21:46.069 "trtype": "tcp", 00:21:46.069 "traddr": "10.0.0.2", 00:21:46.069 "adrfam": "ipv4", 00:21:46.069 "trsvcid": "4420", 00:21:46.069 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:46.069 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:46.069 "hdgst": false, 00:21:46.069 "ddgst": false 00:21:46.069 }, 00:21:46.069 "method": 
"bdev_nvme_attach_controller" 00:21:46.069 },{ 00:21:46.069 "params": { 00:21:46.069 "name": "Nvme7", 00:21:46.069 "trtype": "tcp", 00:21:46.069 "traddr": "10.0.0.2", 00:21:46.069 "adrfam": "ipv4", 00:21:46.069 "trsvcid": "4420", 00:21:46.069 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:46.069 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:46.069 "hdgst": false, 00:21:46.069 "ddgst": false 00:21:46.069 }, 00:21:46.069 "method": "bdev_nvme_attach_controller" 00:21:46.069 },{ 00:21:46.069 "params": { 00:21:46.069 "name": "Nvme8", 00:21:46.069 "trtype": "tcp", 00:21:46.069 "traddr": "10.0.0.2", 00:21:46.069 "adrfam": "ipv4", 00:21:46.069 "trsvcid": "4420", 00:21:46.069 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:46.069 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:46.069 "hdgst": false, 00:21:46.069 "ddgst": false 00:21:46.069 }, 00:21:46.069 "method": "bdev_nvme_attach_controller" 00:21:46.069 },{ 00:21:46.069 "params": { 00:21:46.069 "name": "Nvme9", 00:21:46.069 "trtype": "tcp", 00:21:46.069 "traddr": "10.0.0.2", 00:21:46.069 "adrfam": "ipv4", 00:21:46.069 "trsvcid": "4420", 00:21:46.069 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:46.069 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:46.069 "hdgst": false, 00:21:46.069 "ddgst": false 00:21:46.069 }, 00:21:46.069 "method": "bdev_nvme_attach_controller" 00:21:46.069 },{ 00:21:46.069 "params": { 00:21:46.069 "name": "Nvme10", 00:21:46.069 "trtype": "tcp", 00:21:46.069 "traddr": "10.0.0.2", 00:21:46.069 "adrfam": "ipv4", 00:21:46.069 "trsvcid": "4420", 00:21:46.069 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:46.069 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:46.069 "hdgst": false, 00:21:46.069 "ddgst": false 00:21:46.069 }, 00:21:46.069 "method": "bdev_nvme_attach_controller" 00:21:46.069 }' 00:21:46.069 [2024-07-24 22:09:25.148300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.069 [2024-07-24 22:09:25.216455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.445 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.445 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:47.445 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:47.446 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.446 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:47.446 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.446 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2750179 00:21:47.446 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:47.446 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:48.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2750179 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 
2749929 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.824 { 00:21:48.824 "params": { 00:21:48.824 "name": "Nvme$subsystem", 00:21:48.824 "trtype": "$TEST_TRANSPORT", 00:21:48.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.824 "adrfam": "ipv4", 00:21:48.824 "trsvcid": "$NVMF_PORT", 00:21:48.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.824 "hdgst": ${hdgst:-false}, 00:21:48.824 "ddgst": ${ddgst:-false} 00:21:48.824 }, 00:21:48.824 "method": "bdev_nvme_attach_controller" 00:21:48.824 } 00:21:48.824 EOF 00:21:48.824 )") 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.824 { 00:21:48.824 "params": { 00:21:48.824 "name": "Nvme$subsystem", 00:21:48.824 "trtype": "$TEST_TRANSPORT", 00:21:48.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.824 "adrfam": "ipv4", 00:21:48.824 "trsvcid": "$NVMF_PORT", 00:21:48.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.824 "hdgst": ${hdgst:-false}, 00:21:48.824 "ddgst": ${ddgst:-false} 00:21:48.824 }, 00:21:48.824 "method": "bdev_nvme_attach_controller" 00:21:48.824 } 00:21:48.824 EOF 00:21:48.824 )") 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.824 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.824 { 00:21:48.824 "params": { 00:21:48.824 "name": "Nvme$subsystem", 00:21:48.824 "trtype": "$TEST_TRANSPORT", 00:21:48.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.824 "adrfam": "ipv4", 00:21:48.824 "trsvcid": "$NVMF_PORT", 00:21:48.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.824 "hdgst": ${hdgst:-false}, 00:21:48.824 "ddgst": ${ddgst:-false} 00:21:48.824 }, 00:21:48.824 "method": "bdev_nvme_attach_controller" 00:21:48.824 } 00:21:48.824 EOF 00:21:48.824 )") 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.825 { 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme$subsystem", 00:21:48.825 "trtype": "$TEST_TRANSPORT", 00:21:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "$NVMF_PORT", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.825 "hdgst": ${hdgst:-false}, 00:21:48.825 "ddgst": ${ddgst:-false} 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 } 00:21:48.825 EOF 00:21:48.825 )") 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.825 { 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme$subsystem", 00:21:48.825 "trtype": "$TEST_TRANSPORT", 00:21:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "$NVMF_PORT", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.825 "hdgst": ${hdgst:-false}, 00:21:48.825 "ddgst": ${ddgst:-false} 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 } 00:21:48.825 EOF 00:21:48.825 )") 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.825 { 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme$subsystem", 00:21:48.825 "trtype": "$TEST_TRANSPORT", 00:21:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "$NVMF_PORT", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.825 "hdgst": ${hdgst:-false}, 00:21:48.825 "ddgst": ${ddgst:-false} 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 } 00:21:48.825 EOF 00:21:48.825 )") 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.825 [2024-07-24 22:09:27.707173] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:21:48.825 [2024-07-24 22:09:27.707229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750642 ] 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.825 { 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme$subsystem", 00:21:48.825 "trtype": "$TEST_TRANSPORT", 00:21:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "$NVMF_PORT", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.825 "hdgst": ${hdgst:-false}, 00:21:48.825 "ddgst": ${ddgst:-false} 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 } 00:21:48.825 EOF 00:21:48.825 )") 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.825 { 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme$subsystem", 00:21:48.825 "trtype": "$TEST_TRANSPORT", 00:21:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "$NVMF_PORT", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.825 "hdgst": ${hdgst:-false}, 00:21:48.825 "ddgst": ${ddgst:-false} 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 } 00:21:48.825 EOF 00:21:48.825 )") 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.825 { 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme$subsystem", 00:21:48.825 "trtype": "$TEST_TRANSPORT", 00:21:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "$NVMF_PORT", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.825 "hdgst": ${hdgst:-false}, 00:21:48.825 "ddgst": ${ddgst:-false} 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 } 00:21:48.825 EOF 00:21:48.825 )") 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.825 { 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme$subsystem", 00:21:48.825 "trtype": "$TEST_TRANSPORT", 00:21:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.825 
"adrfam": "ipv4", 00:21:48.825 "trsvcid": "$NVMF_PORT", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.825 "hdgst": ${hdgst:-false}, 00:21:48.825 "ddgst": ${ddgst:-false} 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 } 00:21:48.825 EOF 00:21:48.825 )") 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.825 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:48.825 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme1", 00:21:48.825 "trtype": "tcp", 00:21:48.825 "traddr": "10.0.0.2", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "4420", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.825 "hdgst": false, 00:21:48.825 "ddgst": false 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 },{ 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme2", 00:21:48.825 "trtype": "tcp", 00:21:48.825 "traddr": "10.0.0.2", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "4420", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:48.825 "hdgst": false, 00:21:48.825 "ddgst": false 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 },{ 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme3", 00:21:48.825 "trtype": "tcp", 00:21:48.825 "traddr": "10.0.0.2", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "4420", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:48.825 "hdgst": false, 00:21:48.825 "ddgst": false 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 },{ 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme4", 00:21:48.825 "trtype": "tcp", 00:21:48.825 "traddr": "10.0.0.2", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "4420", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:48.825 "hdgst": false, 00:21:48.825 "ddgst": false 00:21:48.825 }, 00:21:48.825 "method": "bdev_nvme_attach_controller" 00:21:48.825 },{ 00:21:48.825 "params": { 00:21:48.825 "name": "Nvme5", 00:21:48.825 "trtype": "tcp", 00:21:48.825 "traddr": "10.0.0.2", 00:21:48.825 "adrfam": "ipv4", 00:21:48.825 "trsvcid": "4420", 00:21:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:48.825 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:48.825 "hdgst": false, 00:21:48.825 "ddgst": false 00:21:48.825 }, 00:21:48.826 "method": "bdev_nvme_attach_controller" 00:21:48.826 },{ 00:21:48.826 "params": { 00:21:48.826 "name": "Nvme6", 00:21:48.826 "trtype": "tcp", 00:21:48.826 "traddr": "10.0.0.2", 00:21:48.826 "adrfam": "ipv4", 00:21:48.826 "trsvcid": "4420", 00:21:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:48.826 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:48.826 "hdgst": false, 00:21:48.826 "ddgst": false 00:21:48.826 }, 00:21:48.826 "method": "bdev_nvme_attach_controller" 00:21:48.826 },{ 00:21:48.826 "params": { 00:21:48.826 "name": "Nvme7", 
00:21:48.826 "trtype": "tcp", 00:21:48.826 "traddr": "10.0.0.2", 00:21:48.826 "adrfam": "ipv4", 00:21:48.826 "trsvcid": "4420", 00:21:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:48.826 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:48.826 "hdgst": false, 00:21:48.826 "ddgst": false 00:21:48.826 }, 00:21:48.826 "method": "bdev_nvme_attach_controller" 00:21:48.826 },{ 00:21:48.826 "params": { 00:21:48.826 "name": "Nvme8", 00:21:48.826 "trtype": "tcp", 00:21:48.826 "traddr": "10.0.0.2", 00:21:48.826 "adrfam": "ipv4", 00:21:48.826 "trsvcid": "4420", 00:21:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:48.826 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:48.826 "hdgst": false, 00:21:48.826 "ddgst": false 00:21:48.826 }, 00:21:48.826 "method": "bdev_nvme_attach_controller" 00:21:48.826 },{ 00:21:48.826 "params": { 00:21:48.826 "name": "Nvme9", 00:21:48.826 "trtype": "tcp", 00:21:48.826 "traddr": "10.0.0.2", 00:21:48.826 "adrfam": "ipv4", 00:21:48.826 "trsvcid": "4420", 00:21:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:48.826 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:48.826 "hdgst": false, 00:21:48.826 "ddgst": false 00:21:48.826 }, 00:21:48.826 "method": "bdev_nvme_attach_controller" 00:21:48.826 },{ 00:21:48.826 "params": { 00:21:48.826 "name": "Nvme10", 00:21:48.826 "trtype": "tcp", 00:21:48.826 "traddr": "10.0.0.2", 00:21:48.826 "adrfam": "ipv4", 00:21:48.826 "trsvcid": "4420", 00:21:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:48.826 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:48.826 "hdgst": false, 00:21:48.826 "ddgst": false 00:21:48.826 }, 00:21:48.826 "method": "bdev_nvme_attach_controller" 00:21:48.826 }' 00:21:48.826 [2024-07-24 22:09:27.780324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.826 [2024-07-24 22:09:27.850889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.202 Running I/O for 1 seconds... 
00:21:51.579 00:21:51.579 Latency(us) 00:21:51.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.579 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme1n1 : 1.15 276.18 17.26 0.00 0.00 228753.12 29569.84 197971.15 00:21:51.579 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme2n1 : 1.12 229.60 14.35 0.00 0.00 272038.50 18874.37 226492.42 00:21:51.579 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme3n1 : 1.12 285.20 17.82 0.00 0.00 215579.53 18140.36 221459.25 00:21:51.579 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme4n1 : 1.12 285.79 17.86 0.00 0.00 211774.67 16567.50 206359.76 00:21:51.579 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme5n1 : 1.17 274.31 17.14 0.00 0.00 217620.48 18035.51 228170.14 00:21:51.579 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme6n1 : 1.13 283.11 17.69 0.00 0.00 207196.65 18035.51 206359.76 00:21:51.579 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme7n1 : 1.11 295.49 18.47 0.00 0.00 193749.35 3958.37 203004.31 00:21:51.579 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme8n1 : 1.15 277.21 17.33 0.00 0.00 205285.46 19084.08 221459.25 00:21:51.579 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme9n1 : 1.16 330.68 20.67 0.00 0.00 169364.28 16462.64 197971.15 00:21:51.579 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:51.579 Verification LBA range: start 0x0 length 0x400 00:21:51.579 Nvme10n1 : 1.18 326.32 20.40 0.00 0.00 169219.75 8808.04 203843.17 00:21:51.579 =================================================================================================================== 00:21:51.579 Total : 2863.89 178.99 0.00 0.00 206219.44 3958.37 228170.14 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:51.579 22:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:51.579 rmmod nvme_tcp 00:21:51.579 rmmod nvme_fabrics 00:21:51.579 rmmod nvme_keyring 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2749929 ']' 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2749929 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2749929 ']' 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2749929 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2749929 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2749929' 00:21:51.579 killing process with pid 2749929 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2749929 00:21:51.579 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2749929 00:21:52.145 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.145 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:52.145 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:52.145 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.146 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.146 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
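Note: the bare "rmmod nvme_tcp", "rmmod nvme_fabrics" and "rmmod nvme_keyring" lines are the verbose output of modprobe -v -r interleaved with the trace, not separate commands. nvmftestfini wraps the unload in a retry loop because the initiator modules can still be busy right after a test; condensed from the @120-@125 lines above (the back-off is illustrative, it is not visible in this excerpt):

set +e                                    # an unload attempt may fail while the module is still in use
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1                               # illustrative back-off only
done
set -e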
00:21:52.146 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.146 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.050 00:21:54.050 real 0m16.569s 00:21:54.050 user 0m34.722s 00:21:54.050 sys 0m7.032s 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:54.050 ************************************ 00:21:54.050 END TEST nvmf_shutdown_tc1 00:21:54.050 ************************************ 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:54.050 ************************************ 00:21:54.050 START TEST nvmf_shutdown_tc2 00:21:54.050 ************************************ 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.050 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.309 22:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:54.309 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.310 22:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:54.310 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:54.310 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.310 22:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:54.310 Found net devices under 0000:af:00.0: cvl_0_0 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:54.310 Found net devices under 0000:af:00.1: cvl_0_1 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.310 22:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:54.310 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:54.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:21:54.570 00:21:54.570 --- 10.0.0.2 ping statistics --- 00:21:54.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.570 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:54.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:54.570 00:21:54.570 --- 10.0.0.1 ping statistics --- 00:21:54.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.570 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2751800 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2751800 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2751800 ']' 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
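Note: for tc2 the target runs inside the cvl_0_0_ns_spdk namespace set up just above, so the initiator side (cvl_0_1, 10.0.0.1) reaches the target side (cvl_0_0, 10.0.0.2) over the two physical E810 ports rather than over loopback; the two pings confirm both directions before the app is started. The doubled "ip netns exec cvl_0_0_ns_spdk" prefix on the launch line appears to be the namespace wrapper applied twice, which is harmless. The -m 0x1E core mask selects cores 1-4, matching the four reactor threads reported next. A one-liner to decode such a mask (illustrative only):

mask=0x1E    # 0b11110 -> bits 1..4 set
for core in {0..7}; do (( (mask >> core) & 1 )) && echo "reactor expected on core $core"; done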
00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.570 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.570 [2024-07-24 22:09:33.670525] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:21:54.570 [2024-07-24 22:09:33.670569] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.570 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.570 [2024-07-24 22:09:33.742041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.828 [2024-07-24 22:09:33.814782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.828 [2024-07-24 22:09:33.814823] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.828 [2024-07-24 22:09:33.814832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.828 [2024-07-24 22:09:33.814840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.828 [2024-07-24 22:09:33.814864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.828 [2024-07-24 22:09:33.814965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.828 [2024-07-24 22:09:33.814995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.828 [2024-07-24 22:09:33.815087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.829 [2024-07-24 22:09:33.815089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:55.396 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.396 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:55.396 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.396 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.396 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.396 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.396 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.396 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.397 [2024-07-24 22:09:34.529085] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.397 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.656 Malloc1 00:21:55.656 [2024-07-24 22:09:34.640011] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.656 Malloc2 00:21:55.656 Malloc3 00:21:55.656 Malloc4 00:21:55.656 Malloc5 00:21:55.656 Malloc6 00:21:55.918 Malloc7 00:21:55.918 Malloc8 00:21:55.918 Malloc9 00:21:55.918 Malloc10 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2752109 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2752109 /var/tmp/bdevperf.sock 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2752109 ']' 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:55.918 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
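Note: the per-subsystem heredocs written to rpcs.txt in the loop above appear to be submitted as one rpc_cmd batch, which is what produces the Malloc1..Malloc10 bdevs and the single TCP listener on 10.0.0.2:4420 reported here; the individual RPC lines are not echoed in this excerpt. A hypothetical sketch of one iteration, using standard SPDK RPC names (sizes, serial number and option choices are placeholders, not taken from this log):

# one create_subsystems iteration for subsystem $i, sketched
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420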
00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.919 { 00:21:55.919 "params": { 00:21:55.919 "name": "Nvme$subsystem", 00:21:55.919 "trtype": "$TEST_TRANSPORT", 00:21:55.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.919 "adrfam": "ipv4", 00:21:55.919 "trsvcid": "$NVMF_PORT", 00:21:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.919 "hdgst": ${hdgst:-false}, 00:21:55.919 "ddgst": ${ddgst:-false} 00:21:55.919 }, 00:21:55.919 "method": "bdev_nvme_attach_controller" 00:21:55.919 } 00:21:55.919 EOF 00:21:55.919 )") 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.919 { 00:21:55.919 "params": { 00:21:55.919 "name": "Nvme$subsystem", 00:21:55.919 "trtype": "$TEST_TRANSPORT", 00:21:55.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.919 "adrfam": "ipv4", 00:21:55.919 "trsvcid": "$NVMF_PORT", 00:21:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.919 "hdgst": ${hdgst:-false}, 00:21:55.919 "ddgst": ${ddgst:-false} 00:21:55.919 }, 00:21:55.919 "method": "bdev_nvme_attach_controller" 00:21:55.919 } 00:21:55.919 EOF 00:21:55.919 )") 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.919 { 00:21:55.919 "params": { 00:21:55.919 "name": "Nvme$subsystem", 00:21:55.919 "trtype": "$TEST_TRANSPORT", 00:21:55.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.919 "adrfam": "ipv4", 00:21:55.919 "trsvcid": "$NVMF_PORT", 00:21:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.919 "hdgst": ${hdgst:-false}, 00:21:55.919 "ddgst": ${ddgst:-false} 00:21:55.919 }, 00:21:55.919 "method": "bdev_nvme_attach_controller" 00:21:55.919 } 00:21:55.919 EOF 00:21:55.919 )") 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.919 { 00:21:55.919 "params": { 00:21:55.919 "name": "Nvme$subsystem", 00:21:55.919 "trtype": "$TEST_TRANSPORT", 00:21:55.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.919 "adrfam": "ipv4", 00:21:55.919 "trsvcid": "$NVMF_PORT", 00:21:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.919 "hdgst": ${hdgst:-false}, 00:21:55.919 "ddgst": ${ddgst:-false} 00:21:55.919 }, 00:21:55.919 "method": "bdev_nvme_attach_controller" 00:21:55.919 } 00:21:55.919 EOF 00:21:55.919 )") 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.919 { 00:21:55.919 "params": { 00:21:55.919 "name": "Nvme$subsystem", 00:21:55.919 "trtype": "$TEST_TRANSPORT", 00:21:55.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.919 "adrfam": "ipv4", 00:21:55.919 "trsvcid": "$NVMF_PORT", 00:21:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.919 "hdgst": ${hdgst:-false}, 00:21:55.919 "ddgst": ${ddgst:-false} 00:21:55.919 }, 00:21:55.919 "method": "bdev_nvme_attach_controller" 00:21:55.919 } 00:21:55.919 EOF 00:21:55.919 )") 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.919 { 00:21:55.919 "params": { 00:21:55.919 "name": "Nvme$subsystem", 00:21:55.919 "trtype": "$TEST_TRANSPORT", 00:21:55.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.919 "adrfam": "ipv4", 00:21:55.919 "trsvcid": "$NVMF_PORT", 00:21:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.919 "hdgst": ${hdgst:-false}, 00:21:55.919 "ddgst": ${ddgst:-false} 00:21:55.919 }, 00:21:55.919 "method": "bdev_nvme_attach_controller" 00:21:55.919 } 00:21:55.919 EOF 00:21:55.919 )") 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.919 [2024-07-24 22:09:35.122505] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:21:55.919 [2024-07-24 22:09:35.122561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752109 ] 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.919 { 00:21:55.919 "params": { 00:21:55.919 "name": "Nvme$subsystem", 00:21:55.919 "trtype": "$TEST_TRANSPORT", 00:21:55.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.919 "adrfam": "ipv4", 00:21:55.919 "trsvcid": "$NVMF_PORT", 00:21:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.919 "hdgst": ${hdgst:-false}, 00:21:55.919 "ddgst": ${ddgst:-false} 00:21:55.919 }, 00:21:55.919 "method": "bdev_nvme_attach_controller" 00:21:55.919 } 00:21:55.919 EOF 00:21:55.919 )") 00:21:55.919 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:56.181 { 00:21:56.181 "params": { 00:21:56.181 "name": "Nvme$subsystem", 00:21:56.181 "trtype": "$TEST_TRANSPORT", 00:21:56.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.181 "adrfam": "ipv4", 00:21:56.181 "trsvcid": "$NVMF_PORT", 00:21:56.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.181 "hdgst": ${hdgst:-false}, 00:21:56.181 "ddgst": ${ddgst:-false} 00:21:56.181 }, 00:21:56.181 "method": "bdev_nvme_attach_controller" 00:21:56.181 } 00:21:56.181 EOF 00:21:56.181 )") 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:56.181 { 00:21:56.181 "params": { 00:21:56.181 "name": "Nvme$subsystem", 00:21:56.181 "trtype": "$TEST_TRANSPORT", 00:21:56.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.181 "adrfam": "ipv4", 00:21:56.181 "trsvcid": "$NVMF_PORT", 00:21:56.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.181 "hdgst": ${hdgst:-false}, 00:21:56.181 "ddgst": ${ddgst:-false} 00:21:56.181 }, 00:21:56.181 "method": "bdev_nvme_attach_controller" 00:21:56.181 } 00:21:56.181 EOF 00:21:56.181 )") 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:56.181 { 00:21:56.181 "params": { 00:21:56.181 "name": "Nvme$subsystem", 00:21:56.181 "trtype": "$TEST_TRANSPORT", 00:21:56.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.181 
"adrfam": "ipv4", 00:21:56.181 "trsvcid": "$NVMF_PORT", 00:21:56.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.181 "hdgst": ${hdgst:-false}, 00:21:56.181 "ddgst": ${ddgst:-false} 00:21:56.181 }, 00:21:56.181 "method": "bdev_nvme_attach_controller" 00:21:56.181 } 00:21:56.181 EOF 00:21:56.181 )") 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:56.181 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:56.181 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:56.181 "params": { 00:21:56.181 "name": "Nvme1", 00:21:56.181 "trtype": "tcp", 00:21:56.181 "traddr": "10.0.0.2", 00:21:56.181 "adrfam": "ipv4", 00:21:56.181 "trsvcid": "4420", 00:21:56.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.181 "hdgst": false, 00:21:56.181 "ddgst": false 00:21:56.181 }, 00:21:56.181 "method": "bdev_nvme_attach_controller" 00:21:56.181 },{ 00:21:56.181 "params": { 00:21:56.181 "name": "Nvme2", 00:21:56.181 "trtype": "tcp", 00:21:56.181 "traddr": "10.0.0.2", 00:21:56.181 "adrfam": "ipv4", 00:21:56.181 "trsvcid": "4420", 00:21:56.181 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:56.181 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:56.181 "hdgst": false, 00:21:56.181 "ddgst": false 00:21:56.181 }, 00:21:56.182 "method": "bdev_nvme_attach_controller" 00:21:56.182 },{ 00:21:56.182 "params": { 00:21:56.182 "name": "Nvme3", 00:21:56.182 "trtype": "tcp", 00:21:56.182 "traddr": "10.0.0.2", 00:21:56.182 "adrfam": "ipv4", 00:21:56.182 "trsvcid": "4420", 00:21:56.182 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:56.182 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:56.182 "hdgst": false, 00:21:56.182 "ddgst": false 00:21:56.182 }, 00:21:56.182 "method": "bdev_nvme_attach_controller" 00:21:56.182 },{ 00:21:56.182 "params": { 00:21:56.182 "name": "Nvme4", 00:21:56.182 "trtype": "tcp", 00:21:56.182 "traddr": "10.0.0.2", 00:21:56.182 "adrfam": "ipv4", 00:21:56.182 "trsvcid": "4420", 00:21:56.182 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:56.182 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:56.182 "hdgst": false, 00:21:56.182 "ddgst": false 00:21:56.182 }, 00:21:56.182 "method": "bdev_nvme_attach_controller" 00:21:56.182 },{ 00:21:56.182 "params": { 00:21:56.182 "name": "Nvme5", 00:21:56.182 "trtype": "tcp", 00:21:56.182 "traddr": "10.0.0.2", 00:21:56.182 "adrfam": "ipv4", 00:21:56.182 "trsvcid": "4420", 00:21:56.182 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:56.182 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:56.182 "hdgst": false, 00:21:56.182 "ddgst": false 00:21:56.182 }, 00:21:56.182 "method": "bdev_nvme_attach_controller" 00:21:56.182 },{ 00:21:56.182 "params": { 00:21:56.182 "name": "Nvme6", 00:21:56.182 "trtype": "tcp", 00:21:56.182 "traddr": "10.0.0.2", 00:21:56.182 "adrfam": "ipv4", 00:21:56.182 "trsvcid": "4420", 00:21:56.182 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:56.182 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:56.182 "hdgst": false, 00:21:56.182 "ddgst": false 00:21:56.182 }, 00:21:56.182 "method": "bdev_nvme_attach_controller" 00:21:56.182 },{ 00:21:56.182 "params": { 00:21:56.182 "name": "Nvme7", 
00:21:56.182 "trtype": "tcp", 00:21:56.182 "traddr": "10.0.0.2", 00:21:56.182 "adrfam": "ipv4", 00:21:56.182 "trsvcid": "4420", 00:21:56.182 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:56.182 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:56.182 "hdgst": false, 00:21:56.182 "ddgst": false 00:21:56.182 }, 00:21:56.182 "method": "bdev_nvme_attach_controller" 00:21:56.182 },{ 00:21:56.182 "params": { 00:21:56.182 "name": "Nvme8", 00:21:56.182 "trtype": "tcp", 00:21:56.182 "traddr": "10.0.0.2", 00:21:56.182 "adrfam": "ipv4", 00:21:56.182 "trsvcid": "4420", 00:21:56.182 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:56.182 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:56.182 "hdgst": false, 00:21:56.182 "ddgst": false 00:21:56.182 }, 00:21:56.182 "method": "bdev_nvme_attach_controller" 00:21:56.182 },{ 00:21:56.182 "params": { 00:21:56.182 "name": "Nvme9", 00:21:56.182 "trtype": "tcp", 00:21:56.182 "traddr": "10.0.0.2", 00:21:56.182 "adrfam": "ipv4", 00:21:56.182 "trsvcid": "4420", 00:21:56.182 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:56.182 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:56.182 "hdgst": false, 00:21:56.182 "ddgst": false 00:21:56.182 }, 00:21:56.182 "method": "bdev_nvme_attach_controller" 00:21:56.182 },{ 00:21:56.182 "params": { 00:21:56.182 "name": "Nvme10", 00:21:56.182 "trtype": "tcp", 00:21:56.182 "traddr": "10.0.0.2", 00:21:56.182 "adrfam": "ipv4", 00:21:56.182 "trsvcid": "4420", 00:21:56.182 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:56.182 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:56.182 "hdgst": false, 00:21:56.182 "ddgst": false 00:21:56.182 }, 00:21:56.182 "method": "bdev_nvme_attach_controller" 00:21:56.182 }' 00:21:56.182 [2024-07-24 22:09:35.195183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.182 [2024-07-24 22:09:35.263127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.603 Running I/O for 10 seconds... 
00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.603 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.866 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:57.866 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:57.866 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:57.866 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:57.866 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:57.866 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:58.125 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.125 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:58.126 22:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.126 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.126 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:58.126 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:58.126 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:58.385 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:58.385 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:58.385 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:58.385 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:58.385 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.385 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.385 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.385 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:58.385 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2752109 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2752109 ']' 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2752109 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2752109 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2752109' 00:21:58.386 killing process with pid 2752109 00:21:58.386 22:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2752109
00:21:58.386 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2752109
00:21:58.386 Received shutdown signal, test time was about 0.925329 seconds
00:21:58.386
00:21:58.386 Latency(us)
00:21:58.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:58.386 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme1n1 : 0.90 283.75 17.73 0.00 0.00 222299.14 16357.79 203004.31
00:21:58.386 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme2n1 : 0.90 284.06 17.75 0.00 0.00 219217.92 17616.08 203843.17
00:21:58.386 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme3n1 : 0.92 346.06 21.63 0.00 0.00 177078.60 18140.36 198810.01
00:21:58.386 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme4n1 : 0.88 290.18 18.14 0.00 0.00 207008.97 16567.50 202165.45
00:21:58.386 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme5n1 : 0.92 279.34 17.46 0.00 0.00 211760.95 17825.79 207198.62
00:21:58.386 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme6n1 : 0.91 281.75 17.61 0.00 0.00 206183.42 19608.37 205520.90
00:21:58.386 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme7n1 : 0.89 287.45 17.97 0.00 0.00 197829.84 16567.50 202165.45
00:21:58.386 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme8n1 : 0.91 280.94 17.56 0.00 0.00 199275.93 18140.36 199648.87
00:21:58.386 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme9n1 : 0.92 277.66 17.35 0.00 0.00 198358.22 19398.66 212231.78
00:21:58.386 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.386 Verification LBA range: start 0x0 length 0x400
00:21:58.386 Nvme10n1 : 0.92 278.58 17.41 0.00 0.00 193965.47 18350.08 231525.58
00:21:58.386 ===================================================================================================================
00:21:58.386 Total : 2889.78 180.61 0.00 0.00 202658.35 16357.79 231525.58
00:21:58.645 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2751800
00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.583 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.583 rmmod nvme_tcp 00:21:59.583 rmmod nvme_fabrics 00:21:59.843 rmmod nvme_keyring 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2751800 ']' 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2751800 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2751800 ']' 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2751800 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2751800 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2751800' 00:21:59.843 killing process with pid 2751800 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2751800 00:21:59.843 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2751800 00:22:00.103 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:00.103 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
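[editor's note] The tc2 teardown traced above is the usual two-step stop: nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring, then killprocess reaps the nvmf_tgt process (pid 2751800, reactor_1 in the trace). A minimal sketch of that kill-and-reap helper, simplified from the xtrace; the real killprocess in autotest_common.sh has more branches (for example, processes launched through sudo are handled differently):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")       # sanity-check what we are about to signal
        [ "$name" = sudo ] && return 1                # simplification: refuse instead of escalating
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                           # reap; a non-zero exit is expected here
    }
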
00:22:00.103 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:00.103 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.103 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.103 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.103 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.103 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:02.645 00:22:02.645 real 0m8.112s 00:22:02.645 user 0m24.250s 00:22:02.645 sys 0m1.659s 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.645 ************************************ 00:22:02.645 END TEST nvmf_shutdown_tc2 00:22:02.645 ************************************ 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:02.645 ************************************ 00:22:02.645 START TEST nvmf_shutdown_tc3 00:22:02.645 ************************************ 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
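[editor's note] tc2 is closed out with its END TEST banner and real/user/sys accounting, and tc3 is launched through the same run_test wrapper. A rough sketch of that wrapper, assuming a simplified form; the real run_test in autotest_common.sh also records per-test timing for the report and validates its arguments, which is what the '[' 2 -le 1 ']' check above hints at:

    run_test() {
        local test_name=$1; shift
        [ "$#" -ge 1 ] || return 1                    # need a test function to run
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                                     # e.g. run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
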
00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:02.645 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:02.645 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.645 22:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.645 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:02.646 Found net devices under 0000:af:00.0: cvl_0_0 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:02.646 Found net devices under 0000:af:00.1: cvl_0_1 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.646 22:09:41 
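[editor's note] nvmftestinit has now walked the supported-NIC tables and found both E810 ports (0x8086 - 0x159b, ice driver) with their up net devices cvl_0_0 and cvl_0_1. A condensed sketch of that discovery loop, assuming a simplified form of gather_supported_nvmf_pci_devs in nvmf/common.sh (the real helper also covers the x722 and Mellanox IDs listed above, the link-up check, and the RDMA-only cases):

    pci_devs=(0000:af:00.0 0000:af:00.1)              # the two e810 ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        for net_dev in "${pci_net_devs[@]}"; do
            net_devs+=("$(basename "$net_dev")")      # -> cvl_0_0, then cvl_0_1
        done
    done
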
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:02.646 00:22:02.646 --- 10.0.0.2 ping statistics --- 00:22:02.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.646 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:22:02.646 00:22:02.646 --- 10.0.0.1 ping statistics --- 00:22:02.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.646 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2753318 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2753318 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2753318 ']' 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
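[editor's note] The two ping exchanges above confirm the split topology that nvmf_tcp_init just built: the target-side port cvl_0_0 lives in the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator keeps cvl_0_1 in the default namespace as 10.0.0.1, and TCP port 4420 is opened for the listener; nvmf_tgt (pid 2753318) is then started inside that namespace. Condensed from the trace, the network setup boils down to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
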
00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.646 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.906 [2024-07-24 22:09:41.880429] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:22:02.906 [2024-07-24 22:09:41.880479] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.906 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.906 [2024-07-24 22:09:41.953344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.906 [2024-07-24 22:09:42.026550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.906 [2024-07-24 22:09:42.026587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.906 [2024-07-24 22:09:42.026597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.906 [2024-07-24 22:09:42.026605] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.906 [2024-07-24 22:09:42.026613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.906 [2024-07-24 22:09:42.026652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.906 [2024-07-24 22:09:42.026679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.906 [2024-07-24 22:09:42.026789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.906 [2024-07-24 22:09:42.026789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:03.475 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.475 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:03.475 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.475 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.475 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.735 [2024-07-24 22:09:42.735107] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.735 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.735 Malloc1 00:22:03.735 [2024-07-24 22:09:42.845960] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.735 Malloc2 00:22:03.735 Malloc3 00:22:03.994 Malloc4 00:22:03.994 Malloc5 00:22:03.994 Malloc6 00:22:03.994 Malloc7 00:22:03.994 Malloc8 00:22:03.994 Malloc9 00:22:04.255 Malloc10 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2753630 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2753630 /var/tmp/bdevperf.sock 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2753630 ']' 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
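[editor's note] The for-loop over "${num_subsystems[@]}" above appends one RPC block per subsystem to rpcs.txt and then replays the whole file through rpc_cmd, which is where the Malloc1..Malloc10 bdevs and the listener on 10.0.0.2:4420 come from. The block contents are not echoed in this trace; a plausible reconstruction, assuming the standard SPDK RPC names and purely illustrative malloc sizing and serial numbers, is:

    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"                     # replay all queued RPCs in one shot
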
00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.255 { 00:22:04.255 "params": { 00:22:04.255 "name": "Nvme$subsystem", 00:22:04.255 "trtype": "$TEST_TRANSPORT", 00:22:04.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.255 "adrfam": "ipv4", 00:22:04.255 "trsvcid": "$NVMF_PORT", 00:22:04.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.255 "hdgst": ${hdgst:-false}, 00:22:04.255 "ddgst": ${ddgst:-false} 00:22:04.255 }, 00:22:04.255 "method": "bdev_nvme_attach_controller" 00:22:04.255 } 00:22:04.255 EOF 00:22:04.255 )") 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.255 { 00:22:04.255 "params": { 00:22:04.255 "name": "Nvme$subsystem", 00:22:04.255 "trtype": "$TEST_TRANSPORT", 00:22:04.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.255 "adrfam": "ipv4", 00:22:04.255 "trsvcid": "$NVMF_PORT", 00:22:04.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.255 "hdgst": ${hdgst:-false}, 00:22:04.255 "ddgst": ${ddgst:-false} 00:22:04.255 }, 00:22:04.255 "method": "bdev_nvme_attach_controller" 00:22:04.255 } 00:22:04.255 EOF 00:22:04.255 )") 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.255 { 00:22:04.255 "params": { 00:22:04.255 "name": "Nvme$subsystem", 00:22:04.255 "trtype": "$TEST_TRANSPORT", 00:22:04.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.255 "adrfam": "ipv4", 00:22:04.255 "trsvcid": "$NVMF_PORT", 00:22:04.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.255 "hdgst": ${hdgst:-false}, 00:22:04.255 "ddgst": ${ddgst:-false} 00:22:04.255 }, 00:22:04.255 "method": "bdev_nvme_attach_controller" 00:22:04.255 } 00:22:04.255 EOF 00:22:04.255 )") 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:22:04.255 { 00:22:04.255 "params": { 00:22:04.255 "name": "Nvme$subsystem", 00:22:04.255 "trtype": "$TEST_TRANSPORT", 00:22:04.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.255 "adrfam": "ipv4", 00:22:04.255 "trsvcid": "$NVMF_PORT", 00:22:04.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.255 "hdgst": ${hdgst:-false}, 00:22:04.255 "ddgst": ${ddgst:-false} 00:22:04.255 }, 00:22:04.255 "method": "bdev_nvme_attach_controller" 00:22:04.255 } 00:22:04.255 EOF 00:22:04.255 )") 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.255 { 00:22:04.255 "params": { 00:22:04.255 "name": "Nvme$subsystem", 00:22:04.255 "trtype": "$TEST_TRANSPORT", 00:22:04.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.255 "adrfam": "ipv4", 00:22:04.255 "trsvcid": "$NVMF_PORT", 00:22:04.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.255 "hdgst": ${hdgst:-false}, 00:22:04.255 "ddgst": ${ddgst:-false} 00:22:04.255 }, 00:22:04.255 "method": "bdev_nvme_attach_controller" 00:22:04.255 } 00:22:04.255 EOF 00:22:04.255 )") 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.255 { 00:22:04.255 "params": { 00:22:04.255 "name": "Nvme$subsystem", 00:22:04.255 "trtype": "$TEST_TRANSPORT", 00:22:04.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.255 "adrfam": "ipv4", 00:22:04.255 "trsvcid": "$NVMF_PORT", 00:22:04.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.255 "hdgst": ${hdgst:-false}, 00:22:04.255 "ddgst": ${ddgst:-false} 00:22:04.255 }, 00:22:04.255 "method": "bdev_nvme_attach_controller" 00:22:04.255 } 00:22:04.255 EOF 00:22:04.255 )") 00:22:04.255 [2024-07-24 22:09:43.325097] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:22:04.255 [2024-07-24 22:09:43.325149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753630 ] 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.255 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.255 { 00:22:04.255 "params": { 00:22:04.255 "name": "Nvme$subsystem", 00:22:04.255 "trtype": "$TEST_TRANSPORT", 00:22:04.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.255 "adrfam": "ipv4", 00:22:04.255 "trsvcid": "$NVMF_PORT", 00:22:04.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.255 "hdgst": ${hdgst:-false}, 00:22:04.255 "ddgst": ${ddgst:-false} 00:22:04.255 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 } 00:22:04.256 EOF 00:22:04.256 )") 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.256 { 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme$subsystem", 00:22:04.256 "trtype": "$TEST_TRANSPORT", 00:22:04.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "$NVMF_PORT", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.256 "hdgst": ${hdgst:-false}, 00:22:04.256 "ddgst": ${ddgst:-false} 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 } 00:22:04.256 EOF 00:22:04.256 )") 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.256 { 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme$subsystem", 00:22:04.256 "trtype": "$TEST_TRANSPORT", 00:22:04.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "$NVMF_PORT", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.256 "hdgst": ${hdgst:-false}, 00:22:04.256 "ddgst": ${ddgst:-false} 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 } 00:22:04.256 EOF 00:22:04.256 )") 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.256 { 00:22:04.256 "params": { 00:22:04.256 "name": 
"Nvme$subsystem", 00:22:04.256 "trtype": "$TEST_TRANSPORT", 00:22:04.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "$NVMF_PORT", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.256 "hdgst": ${hdgst:-false}, 00:22:04.256 "ddgst": ${ddgst:-false} 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 } 00:22:04.256 EOF 00:22:04.256 )") 00:22:04.256 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:04.256 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme1", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 },{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme2", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 },{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme3", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 },{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme4", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 },{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme5", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 },{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme6", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 
00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 },{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme7", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 },{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme8", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 },{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme9", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 },{ 00:22:04.256 "params": { 00:22:04.256 "name": "Nvme10", 00:22:04.256 "trtype": "tcp", 00:22:04.256 "traddr": "10.0.0.2", 00:22:04.256 "adrfam": "ipv4", 00:22:04.256 "trsvcid": "4420", 00:22:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:04.256 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:04.256 "hdgst": false, 00:22:04.256 "ddgst": false 00:22:04.256 }, 00:22:04.256 "method": "bdev_nvme_attach_controller" 00:22:04.256 }' 00:22:04.256 [2024-07-24 22:09:43.399521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.514 [2024-07-24 22:09:43.468785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.892 Running I/O for 10 seconds... 
00:22:05.892 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.892 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:05.892 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:05.892 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.892 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:05.892 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.152 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:06.152 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:06.152 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2753318 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2753318 ']' 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2753318 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2753318 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2753318' 00:22:06.426 killing process with pid 2753318 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2753318 00:22:06.426 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2753318 00:22:06.426 [2024-07-24 22:09:45.496705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.426 [2024-07-24 22:09:45.496767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.426 [2024-07-24 22:09:45.496778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.426 [2024-07-24 22:09:45.496788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.426 [2024-07-24 22:09:45.496797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.496996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497005] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 
00:22:06.427 [2024-07-24 22:09:45.497200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.497313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac82d0 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.498385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc460 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.498421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc460 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.498436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc460 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.498449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc460 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.498462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc460 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is 
same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.427 [2024-07-24 22:09:45.500662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500843] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500894] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.500998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.501007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.501016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.501024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.501035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.501043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e60 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.501870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.501902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.501914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.501924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.501934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.501943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.501953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.501962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.501971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f9620 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.502053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf439a0 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.502164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502225] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf34c30 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.502268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.428 [2024-07-24 22:09:45.502332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-24 22:09:45.502341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf59190 is same with the state(5) to be set 00:22:06.428 [2024-07-24 22:09:45.502676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.502983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.502992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:22:06.429 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:1[2024-07-24 22:09:45.503130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 he state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 22:09:45.503140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 he state(5) to be set 00:22:06.429 [2024-07-24 
22:09:45.503151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:1he state(5) to be set 00:22:06.429 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 22:09:45.503162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 he state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:1he state(5) to be set 00:22:06.429 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:22:06.429 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 
[2024-07-24 22:09:45.503263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-24 22:09:45.503285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-24 22:09:45.503296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.429 [2024-07-24 22:09:45.503305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:1[2024-07-24 22:09:45.503306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 he state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 22:09:45.503317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 he state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 
00:22:06.430 [2024-07-24 22:09:45.503374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:1[2024-07-24 22:09:45.503375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 he state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 22:09:45.503386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 he state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) 
to be set 00:22:06.430 [2024-07-24 22:09:45.503483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:1he state(5) to be set 00:22:06.430 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:22:06.430 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:1[2024-07-24 22:09:45.503553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 he state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:22:06.430 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:1he state(5) to be set 00:22:06.430 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:22:06.430 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with t[2024-07-24 22:09:45.503684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:1he state(5) to be set 00:22:06.430 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.430 [2024-07-24 22:09:45.503728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8340 is same with the state(5) to be set 00:22:06.430 [2024-07-24 22:09:45.503733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.430 [2024-07-24 22:09:45.503743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 
22:09:45.503885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.503983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.503993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.504002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.504013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.504022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.504033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.504042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.504070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:06.431 [2024-07-24 22:09:45.504124] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1098240 was disconnected and freed. reset controller. 
00:22:06.431 [2024-07-24 22:09:45.504860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.504880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.504895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.504904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.504915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.504925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.504936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.504945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.504956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.504965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.504976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.504985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.504996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.505005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.505016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.505025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.505035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.505044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.505054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.505063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 
22:09:45.505074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.505083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.505093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.431 [2024-07-24 22:09:45.505102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.431 [2024-07-24 22:09:45.505113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432 [2024-07-24 22:09:45.505124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432 [2024-07-24 22:09:45.505135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432 [2024-07-24 22:09:45.505144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432 [2024-07-24 22:09:45.505155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432 [2024-07-24 22:09:45.505164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432 [2024-07-24 22:09:45.505175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432 [2024-07-24 22:09:45.505184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432 [2024-07-24 22:09:45.505195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432 [2024-07-24 22:09:45.505203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432 [2024-07-24 22:09:45.505214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432 [2024-07-24 22:09:45.505222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432 [2024-07-24 22:09:45.505233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432 [2024-07-24 22:09:45.505242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432 [2024-07-24 22:09:45.505253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432 [2024-07-24 22:09:45.505262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432 [2024-07-24 
22:09:45.505272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.432
[2024-07-24 22:09:45.505669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.432
[2024-07-24 22:09:45.505672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.432
[2024-07-24 22:09:45.505685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.505973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.505987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.505994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.506005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.506005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.506021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.506033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.506047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.506048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.506062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.506075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.506090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.506091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.506104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.506117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.506133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.433
[2024-07-24 22:09:45.506132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.433
[2024-07-24 22:09:45.506145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.433
[2024-07-24 22:09:45.506146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.434
[2024-07-24 22:09:45.506154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.434
[2024-07-24 22:09:45.506163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.434
[2024-07-24 22:09:45.506166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.434
[2024-07-24 22:09:45.506178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.434
[2024-07-24 22:09:45.506177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.434
[2024-07-24 22:09:45.506190] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.434 [2024-07-24 22:09:45.506192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.506200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.434 [2024-07-24 22:09:45.506205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b70 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.506211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.434 [2024-07-24 22:09:45.506220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.434 [2024-07-24 22:09:45.506242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:06.434 [2024-07-24 22:09:45.506293] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf2f3b0 was disconnected and freed. reset controller. 00:22:06.434 [2024-07-24 22:09:45.507440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9050 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508492] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 
00:22:06.434 [2024-07-24 22:09:45.508680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.508738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9510 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.509308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.509328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.509337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.509347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.509359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.509367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.509376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.434 [2024-07-24 22:09:45.509385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is 
same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509829] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.509864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbfa0 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.521863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f9620 (9): Bad file descriptor 00:22:06.435 [2024-07-24 22:09:45.521926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.435 [2024-07-24 22:09:45.521939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.435 [2024-07-24 22:09:45.521950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.435 [2024-07-24 22:09:45.521959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.435 [2024-07-24 22:09:45.521969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.435 [2024-07-24 22:09:45.521978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.435 [2024-07-24 22:09:45.521988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.435 [2024-07-24 22:09:45.521997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.435 [2024-07-24 22:09:45.522006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61340 is same with the state(5) to be set 00:22:06.435 [2024-07-24 22:09:45.522030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.435 [2024-07-24 22:09:45.522041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.435 [2024-07-24 22:09:45.522050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.435 [2024-07-24 22:09:45.522059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.435 [2024-07-24 22:09:45.522069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.435 [2024-07-24 22:09:45.522078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 
[2024-07-24 22:09:45.522088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2d90 is same with the state(5) to be set 00:22:06.437 [2024-07-24 22:09:45.522133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100dbd0 is same with the state(5) to be set 00:22:06.437 [2024-07-24 22:09:45.522241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xa37610 is same with the state(5) to be set 00:22:06.437 [2024-07-24 22:09:45.522342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101a460 is same with the state(5) to be set 00:22:06.437 [2024-07-24 22:09:45.522436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf439a0 (9): Bad file descriptor 00:22:06.437 [2024-07-24 22:09:45.522465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.437 [2024-07-24 22:09:45.522531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.522540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf57420 is same with the state(5) to be set 00:22:06.437 [2024-07-24 22:09:45.522557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf34c30 (9): Bad file descriptor 00:22:06.437 [2024-07-24 22:09:45.522574] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf59190 (9): Bad file descriptor 00:22:06.437 [2024-07-24 22:09:45.524554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.437 [2024-07-24 22:09:45.524811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.437 [2024-07-24 22:09:45.524822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.524831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.524841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.524850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.524861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.524870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.524881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.524890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.524900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.524909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.524920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.524929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.524940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.524949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.524959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.524968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.524980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.524989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.438 [2024-07-24 22:09:45.525580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.438 [2024-07-24 22:09:45.525590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.525840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.525914] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1862310 was disconnected and freed. reset controller. 00:22:06.439 [2024-07-24 22:09:45.526054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526160] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.439 [2024-07-24 22:09:45.526517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.439 [2024-07-24 22:09:45.526526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.526986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.526996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.440 [2024-07-24 22:09:45.527299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.440 [2024-07-24 22:09:45.527309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.527318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.528067] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfedec0 was disconnected and freed. reset controller. 
00:22:06.441 [2024-07-24 22:09:45.528088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:06.441 [2024-07-24 22:09:45.528104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:06.441 [2024-07-24 22:09:45.528119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61340 (9): Bad file descriptor 00:22:06.441 [2024-07-24 22:09:45.530460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.441 [2024-07-24 22:09:45.530485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf59190 with addr=10.0.0.2, port=4420 00:22:06.441 [2024-07-24 22:09:45.530496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf59190 is same with the state(5) to be set 00:22:06.441 [2024-07-24 22:09:45.531276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:06.441 [2024-07-24 22:09:45.531300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.441 [2024-07-24 22:09:45.531327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37610 (9): Bad file descriptor 00:22:06.441 [2024-07-24 22:09:45.531518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.441 [2024-07-24 22:09:45.531533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf61340 with addr=10.0.0.2, port=4420 00:22:06.441 [2024-07-24 22:09:45.531543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61340 is same with the state(5) to be set 00:22:06.441 [2024-07-24 22:09:45.531554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf59190 (9): Bad file descriptor 00:22:06.441 [2024-07-24 22:09:45.531601] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.441 [2024-07-24 22:09:45.531659] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.441 [2024-07-24 22:09:45.531721] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.441 [2024-07-24 22:09:45.532544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.441 [2024-07-24 22:09:45.532565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f9620 with addr=10.0.0.2, port=4420 00:22:06.441 [2024-07-24 22:09:45.532576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f9620 is same with the state(5) to be set 00:22:06.441 [2024-07-24 22:09:45.532598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61340 (9): Bad file descriptor 00:22:06.441 [2024-07-24 22:09:45.532610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:06.441 [2024-07-24 22:09:45.532619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:06.441 [2024-07-24 22:09:45.532630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:06.441 [2024-07-24 22:09:45.532648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2d90 (9): Bad file descriptor 00:22:06.441 [2024-07-24 22:09:45.532668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100dbd0 (9): Bad file descriptor 00:22:06.441 [2024-07-24 22:09:45.532688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101a460 (9): Bad file descriptor 00:22:06.441 [2024-07-24 22:09:45.532713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf57420 (9): Bad file descriptor 00:22:06.441 [2024-07-24 22:09:45.532839] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.441 [2024-07-24 22:09:45.532899] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.441 [2024-07-24 22:09:45.532948] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.441 [2024-07-24 22:09:45.532967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.441 [2024-07-24 22:09:45.533141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.441 [2024-07-24 22:09:45.533155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa37610 with addr=10.0.0.2, port=4420 00:22:06.441 [2024-07-24 22:09:45.533166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa37610 is same with the state(5) to be set 00:22:06.441 [2024-07-24 22:09:45.533177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f9620 (9): Bad file descriptor 00:22:06.441 [2024-07-24 22:09:45.533188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:06.441 [2024-07-24 22:09:45.533197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:06.441 [2024-07-24 22:09:45.533206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:22:06.441 [2024-07-24 22:09:45.533262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 
22:09:45.533467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.441 [2024-07-24 22:09:45.533567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.441 [2024-07-24 22:09:45.533575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533664] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.533986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.533997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.442 [2024-07-24 22:09:45.534356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.442 [2024-07-24 22:09:45.534366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.534375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.534386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.534395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.534406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.534414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.534425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.534434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.534445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.534454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.534465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.534473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.534484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.534493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.534503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.534512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.534524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.534533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.534543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff4970 is same with the state(5) to be set 00:22:06.443 [2024-07-24 22:09:45.535516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535840] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.535985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.535996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.536007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.536015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.536026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.536035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.536045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.536054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.536065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.536074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.536084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.536093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.536104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.443 [2024-07-24 22:09:45.536112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.443 [2024-07-24 22:09:45.536123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.444 [2024-07-24 22:09:45.536629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.444 [2024-07-24 22:09:45.536780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-07-24 22:09:45.536790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10996f0 is same with the state(5) to be set 00:22:06.444 [2024-07-24 22:09:45.537781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.444 [2024-07-24 22:09:45.537796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.444 [2024-07-24 22:09:45.537807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:06.444 [2024-07-24 22:09:45.537834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37610 (9): Bad file descriptor 00:22:06.444 [2024-07-24 22:09:45.537846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.444 [2024-07-24 22:09:45.537854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.444 [2024-07-24 22:09:45.537864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.444 [2024-07-24 22:09:45.537921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.444 [2024-07-24 22:09:45.538196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.444 [2024-07-24 22:09:45.538210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf34c30 with addr=10.0.0.2, port=4420 00:22:06.444 [2024-07-24 22:09:45.538220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf34c30 is same with the state(5) to be set 00:22:06.444 [2024-07-24 22:09:45.538483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.444 [2024-07-24 22:09:45.538495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf439a0 with addr=10.0.0.2, port=4420 00:22:06.445 [2024-07-24 22:09:45.538504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf439a0 is same with the state(5) to be set 00:22:06.445 [2024-07-24 22:09:45.538513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:06.445 [2024-07-24 22:09:45.538522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:06.445 [2024-07-24 22:09:45.538530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:06.445 [2024-07-24 22:09:45.538993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.445 [2024-07-24 22:09:45.539005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf34c30 (9): Bad file descriptor 00:22:06.445 [2024-07-24 22:09:45.539016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf439a0 (9): Bad file descriptor 00:22:06.445 [2024-07-24 22:09:45.539066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.445 [2024-07-24 22:09:45.539076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:06.445 [2024-07-24 22:09:45.539088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:06.445 [2024-07-24 22:09:45.539099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:06.445 [2024-07-24 22:09:45.539107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:06.445 [2024-07-24 22:09:45.539116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:06.445 [2024-07-24 22:09:45.539158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:06.445 [2024-07-24 22:09:45.539170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.445 [2024-07-24 22:09:45.539177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.445 [2024-07-24 22:09:45.539437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.445 [2024-07-24 22:09:45.539451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf59190 with addr=10.0.0.2, port=4420 00:22:06.445 [2024-07-24 22:09:45.539460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf59190 is same with the state(5) to be set 00:22:06.445 [2024-07-24 22:09:45.539489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf59190 (9): Bad file descriptor 00:22:06.445 [2024-07-24 22:09:45.539518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:06.445 [2024-07-24 22:09:45.539527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:06.445 [2024-07-24 22:09:45.539535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:06.445 [2024-07-24 22:09:45.539564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.445 [2024-07-24 22:09:45.540585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:06.445 [2024-07-24 22:09:45.540787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.445 [2024-07-24 22:09:45.540801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf61340 with addr=10.0.0.2, port=4420 00:22:06.445 [2024-07-24 22:09:45.540810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61340 is same with the state(5) to be set 00:22:06.445 [2024-07-24 22:09:45.540840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61340 (9): Bad file descriptor 00:22:06.445 [2024-07-24 22:09:45.540869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:06.445 [2024-07-24 22:09:45.540878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:06.445 [2024-07-24 22:09:45.540887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:06.445 [2024-07-24 22:09:45.540916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.445 [2024-07-24 22:09:45.541404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.445 [2024-07-24 22:09:45.541665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.445 [2024-07-24 22:09:45.541679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f9620 with addr=10.0.0.2, port=4420 00:22:06.445 [2024-07-24 22:09:45.541688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f9620 is same with the state(5) to be set 00:22:06.445 [2024-07-24 22:09:45.541721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f9620 (9): Bad file descriptor 00:22:06.445 [2024-07-24 22:09:45.541751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.445 [2024-07-24 22:09:45.541761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.445 [2024-07-24 22:09:45.541773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.445 [2024-07-24 22:09:45.541802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.445 [2024-07-24 22:09:45.542427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-07-24 22:09:45.542847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-07-24 22:09:45.542858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.542867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.542878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.542887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.542897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.542906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.542917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.542927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.542938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.542947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.542958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.542967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.542977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.542986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.542997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:06.446 [2024-07-24 22:09:45.543358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-07-24 22:09:45.543507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-07-24 22:09:45.543515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.543535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 
22:09:45.543554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.543574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.543594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.543613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.543633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.543652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.543673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.543693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.543702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2df90 is same with the state(5) to be set 00:22:06.447 [2024-07-24 22:09:45.544661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544914] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.544983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.544994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-07-24 22:09:45.545251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-07-24 22:09:45.545260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:06.448 [2024-07-24 22:09:45.545710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 
22:09:45.545911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.545931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.545941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf308a0 is same with the state(5) to be set 00:22:06.448 [2024-07-24 22:09:45.546906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.546921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.546933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.546942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.546953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.546962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.546973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.546982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-07-24 22:09:45.546993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-07-24 22:09:45.547002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.449 [2024-07-24 22:09:45.547776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.449 [2024-07-24 22:09:45.547786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.547985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.547993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.548013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.548032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.548051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.548072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.548092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.548111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.548130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.548149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.548169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.548178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a09d50 is same with the state(5) to be set 00:22:06.450 [2024-07-24 22:09:45.549144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549225] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.450 [2024-07-24 22:09:45.549387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.450 [2024-07-24 22:09:45.549397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.451 [2024-07-24 22:09:45.549843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.549988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.549997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.550007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.550031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 
22:09:45.550053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.550073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.550093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.550112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.550132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.550152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.550172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.451 [2024-07-24 22:09:45.550195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.451 [2024-07-24 22:09:45.550204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550254] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-07-24 22:09:45.550447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.452 [2024-07-24 22:09:45.550457] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfec9e0 is same with the state(5) to be set
00:22:06.452 [2024-07-24 22:09:45.552046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:06.452 [2024-07-24 22:09:45.552066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:06.452 [2024-07-24 22:09:45.552079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:06.452 task offset: 18176 on job bdev=Nvme2n1 fails
00:22:06.452
00:22:06.452 Latency(us)
00:22:06.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.452 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme1n1 ended in about 0.62 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme1n1 : 0.62 205.20 12.82 102.60 0.00 205163.72 29989.27 184549.38
00:22:06.452 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme2n1 ended in about 0.61 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme2n1 : 0.61 209.19 13.07 104.59 0.00 196268.58 20132.66 176999.63
00:22:06.452 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme3n1 ended in about 0.63 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme3n1 : 0.63 204.47 12.78 102.23 0.00 195973.94 17406.36 203004.31
00:22:06.452 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme4n1 ended in about 0.63 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme4n1 : 0.63 202.24 12.64 101.12 0.00 193317.00 20866.66 199648.87
00:22:06.452 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme5n1 ended in about 0.61 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme5n1 : 0.61 208.87 13.05 104.43 0.00 181663.61 36490.44 212231.78
00:22:06.452 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme6n1 ended in about 0.64 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme6n1 : 0.64 201.53 12.60 100.76 0.00 184088.71 18350.08 187065.96
00:22:06.452 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme7n1 ended in about 0.62 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme7n1 : 0.62 207.26 12.95 103.63 0.00 173319.78 26843.55 209715.20
00:22:06.452 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme8n1 ended in about 0.64 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme8n1 : 0.64 200.82 12.55 100.41 0.00 174971.84 15414.07 203004.31
00:22:06.452 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme9n1 ended in about 0.64 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme9n1 : 0.64 100.05 6.25 100.05 0.00 256255.59 35651.58 231525.58
00:22:06.452 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.452 Job: Nvme10n1 ended in about 0.62 seconds with error
00:22:06.452 Verification LBA range: start 0x0 length 0x400
00:22:06.452 Nvme10n1 : 0.62 206.95 12.93 103.47 0.00 158806.97 7654.60 223136.97
00:22:06.452 ===================================================================================================================
00:22:06.452 Total : 1946.57 121.66 1023.31 0.00 189766.68 7654.60 231525.58
00:22:06.452 [2024-07-24 22:09:45.574053] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:06.452 [2024-07-24 22:09:45.574091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:06.452 [2024-07-24 22:09:45.574645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.452 [2024-07-24 22:09:45.574668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf57420 with addr=10.0.0.2, port=4420
00:22:06.452 [2024-07-24 22:09:45.574680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf57420 is same with the state(5) to be set
00:22:06.452 [2024-07-24 22:09:45.574991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.452 [2024-07-24 22:09:45.575005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100dbd0 with addr=10.0.0.2, port=4420
00:22:06.452 [2024-07-24 22:09:45.575014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100dbd0 is same with the state(5) to be set
00:22:06.452 [2024-07-24 22:09:45.575198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.452 [2024-07-24 22:09:45.575211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101a460 with addr=10.0.0.2, port=4420
00:22:06.452 [2024-07-24 22:09:45.575221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101a460 is same with the state(5) to be set
00:22:06.452 [2024-07-24 22:09:45.575458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.452 [2024-07-24 22:09:45.575471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f2d90 with addr=10.0.0.2, port=4420
00:22:06.452 [2024-07-24 22:09:45.575480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2d90 is same with the state(5) to be set
00:22:06.452 [2024-07-24 22:09:45.575517] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:06.452 [2024-07-24 22:09:45.575531] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:06.452 [2024-07-24 22:09:45.575544] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:06.452 [2024-07-24 22:09:45.575557] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:06.452 [2024-07-24 22:09:45.575569] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:06.452 [2024-07-24 22:09:45.575581] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
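The connect() failures interleaved above all report errno = 111, which on Linux is ECONNREFUSED: the bdev_nvme reconnect path keeps probing the listeners at 10.0.0.2:4420 while the shutdown test has presumably already taken the target down, so refused connections are the expected symptom here rather than a separate failure. A quick way to decode the value on the test node, assuming python3 is available (this helper is illustrative and not part of the test scripts):

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'

If a listener were actually expected to be up, a manual probe with nvme-cli could reuse the same address and service id seen in the log lines above, for example: nvme discover -t tcp -a 10.0.0.2 -s 4420.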
00:22:06.452 [2024-07-24 22:09:45.576469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:06.452 [2024-07-24 22:09:45.576483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:06.452 [2024-07-24 22:09:45.576494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.452 [2024-07-24 22:09:45.576504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:06.452 [2024-07-24 22:09:45.576514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:06.452 [2024-07-24 22:09:45.576524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.452 [2024-07-24 22:09:45.576585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf57420 (9): Bad file descriptor 00:22:06.452 [2024-07-24 22:09:45.576599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100dbd0 (9): Bad file descriptor 00:22:06.452 [2024-07-24 22:09:45.576610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101a460 (9): Bad file descriptor 00:22:06.452 [2024-07-24 22:09:45.576621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2d90 (9): Bad file descriptor 00:22:06.452 [2024-07-24 22:09:45.577345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.453 [2024-07-24 22:09:45.577371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa37610 with addr=10.0.0.2, port=4420 00:22:06.453 [2024-07-24 22:09:45.577383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa37610 is same with the state(5) to be set 00:22:06.453 [2024-07-24 22:09:45.577577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.453 [2024-07-24 22:09:45.577589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf439a0 with addr=10.0.0.2, port=4420 00:22:06.453 [2024-07-24 22:09:45.577598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf439a0 is same with the state(5) to be set 00:22:06.453 [2024-07-24 22:09:45.577771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.453 [2024-07-24 22:09:45.577784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf34c30 with addr=10.0.0.2, port=4420 00:22:06.453 [2024-07-24 22:09:45.577793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf34c30 is same with the state(5) to be set 00:22:06.453 [2024-07-24 22:09:45.578024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.453 [2024-07-24 22:09:45.578036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf59190 with addr=10.0.0.2, port=4420 00:22:06.453 [2024-07-24 22:09:45.578045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf59190 is same with the state(5) to be set 00:22:06.453 [2024-07-24 22:09:45.578354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.453 [2024-07-24 22:09:45.578368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf61340 with 
addr=10.0.0.2, port=4420 00:22:06.453 [2024-07-24 22:09:45.578380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61340 is same with the state(5) to be set 00:22:06.453 [2024-07-24 22:09:45.578661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.453 [2024-07-24 22:09:45.578673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f9620 with addr=10.0.0.2, port=4420 00:22:06.453 [2024-07-24 22:09:45.578683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f9620 is same with the state(5) to be set 00:22:06.453 [2024-07-24 22:09:45.578692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.578701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.578712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:06.453 [2024-07-24 22:09:45.578733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.578742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.578750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:06.453 [2024-07-24 22:09:45.578761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.578769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.578778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:06.453 [2024-07-24 22:09:45.578788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.578796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.578805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:06.453 [2024-07-24 22:09:45.578864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.453 [2024-07-24 22:09:45.578874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.453 [2024-07-24 22:09:45.578882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.453 [2024-07-24 22:09:45.578889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.453 [2024-07-24 22:09:45.578900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37610 (9): Bad file descriptor 00:22:06.453 [2024-07-24 22:09:45.578913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf439a0 (9): Bad file descriptor 00:22:06.453 [2024-07-24 22:09:45.578924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf34c30 (9): Bad file descriptor 00:22:06.453 [2024-07-24 22:09:45.578934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf59190 (9): Bad file descriptor 00:22:06.453 [2024-07-24 22:09:45.578945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61340 (9): Bad file descriptor 00:22:06.453 [2024-07-24 22:09:45.578956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f9620 (9): Bad file descriptor 00:22:06.453 [2024-07-24 22:09:45.578993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.579003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.579011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:06.453 [2024-07-24 22:09:45.579024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.579032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.579041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:06.453 [2024-07-24 22:09:45.579051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.579059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.579068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.453 [2024-07-24 22:09:45.579078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.579086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.579095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:06.453 [2024-07-24 22:09:45.579105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.579113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.579121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:22:06.453 [2024-07-24 22:09:45.579132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.453 [2024-07-24 22:09:45.579141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.453 [2024-07-24 22:09:45.579149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.453 [2024-07-24 22:09:45.579174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.453 [2024-07-24 22:09:45.579183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.453 [2024-07-24 22:09:45.579191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.453 [2024-07-24 22:09:45.579198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.453 [2024-07-24 22:09:45.579206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.453 [2024-07-24 22:09:45.579214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.713 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:06.713 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2753630 00:22:08.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2753630) - No such process 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.092 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.093 rmmod nvme_tcp 00:22:08.093 rmmod nvme_fabrics 00:22:08.093 rmmod nvme_keyring 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.093 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.000 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:10.000 00:22:10.000 real 0m7.654s 00:22:10.000 user 0m17.810s 00:22:10.000 sys 0m1.451s 00:22:10.000 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.000 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.000 ************************************ 00:22:10.000 END TEST nvmf_shutdown_tc3 00:22:10.000 ************************************ 00:22:10.000 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:10.000 00:22:10.000 real 0m32.741s 00:22:10.000 user 1m16.924s 00:22:10.000 sys 0m10.438s 00:22:10.000 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.000 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:10.000 ************************************ 00:22:10.000 END TEST nvmf_shutdown 00:22:10.000 ************************************ 00:22:10.000 22:09:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:10.000 00:22:10.000 real 11m15.349s 00:22:10.000 user 23m46.562s 00:22:10.000 sys 3m58.532s 00:22:10.000 22:09:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.000 22:09:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:10.000 ************************************ 00:22:10.000 END TEST nvmf_target_extra 00:22:10.000 ************************************ 00:22:10.259 22:09:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:10.259 22:09:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:10.259 22:09:49 nvmf_tcp -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.259 22:09:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:10.259 ************************************ 00:22:10.259 START TEST nvmf_host 00:22:10.259 ************************************ 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:10.259 * Looking for test storage... 00:22:10.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.259 ************************************ 00:22:10.259 START TEST nvmf_multicontroller 00:22:10.259 ************************************ 00:22:10.259 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:10.518 * Looking for test storage... 
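Both the nvmf_host.sh prologue above and the multicontroller test that follows source test/nvmf/common.sh, and in each case NVME_HOSTNQN is taken from nvme gen-hostnqn at source time. The same uuid-based NQN (nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e) appears in both dumps, which is consistent with nvme-cli deriving the NQN from the host's DMI product UUID when one is available; that derivation is an assumption about nvme-cli behaviour, not something this log states. Outside the harness a host NQN is normally generated once and pinned; a minimal sketch, assuming the usual nvme-cli default path /etc/nvme/hostnqn rather than anything taken from this log:

  nvme gen-hostnqn | sudo tee /etc/nvme/hostnqn
  cat /etc/nvme/hostnqn    # expected to print an nqn.2014-08.org.nvmexpress:uuid:... string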
00:22:10.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:10.518 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.518 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:10.518 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.518 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.518 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.518 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.518 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.519 22:09:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:10.519 22:09:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.091 22:09:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:17.091 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:17.091 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.091 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:17.092 Found net devices under 0000:af:00.0: cvl_0_0 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:17.092 Found net devices under 0000:af:00.1: cvl_0_1 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.092 22:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:22:17.092 00:22:17.092 --- 10.0.0.2 ping statistics --- 00:22:17.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.092 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:17.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:22:17.092 00:22:17.092 --- 10.0.0.1 ping statistics --- 00:22:17.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.092 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2757929 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2757929 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2757929 ']' 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.092 22:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.092 [2024-07-24 22:09:56.294643] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:22:17.092 [2024-07-24 22:09:56.294692] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.352 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.352 [2024-07-24 22:09:56.367165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.352 [2024-07-24 22:09:56.442034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.352 [2024-07-24 22:09:56.442067] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.352 [2024-07-24 22:09:56.442078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.352 [2024-07-24 22:09:56.442087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.352 [2024-07-24 22:09:56.442094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.352 [2024-07-24 22:09:56.442193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.352 [2024-07-24 22:09:56.442217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.352 [2024-07-24 22:09:56.442219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.920 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.920 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:17.920 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.920 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.920 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 [2024-07-24 22:09:57.154007] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 Malloc0 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 
22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 [2024-07-24 22:09:57.219293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 [2024-07-24 22:09:57.227239] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 Malloc1 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2758208 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.179 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2758208 /var/tmp/bdevperf.sock 00:22:18.180 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2758208 ']' 00:22:18.180 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.180 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.180 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
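At this point the target side is fully configured (tcp transport, Malloc0/Malloc1 namespaces under cnode1/cnode2, listeners on 10.0.0.2 ports 4420 and 4421) and bdevperf has been launched with -z, so it idles until controllers are attached and I/O is kicked off over its own RPC socket. Restated as a hand-run sketch only, the traced rpc_cmd calls correspond roughly to the sequence below; the paths come from this job's workspace, the flags are copied from the trace, and rpc_cmd is assumed here to be a thin wrapper over scripts/rpc.py talking to the default target socket /var/tmp/spdk.sock:
# Target-side configuration (runs against the nvmf_tgt started above):
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# (cnode2/Malloc1 are created the same way in the trace above.)
# Host-side bdevperf in wait-for-RPC mode; the later attach/detach calls and the
# perform_tests trigger all go to /var/tmp/bdevperf.sock, as seen further down:
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &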
00:22:18.180 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.180 22:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.117 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.117 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:19.117 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:19.117 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.117 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.377 NVMe0n1 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.377 1 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.377 request: 00:22:19.377 { 00:22:19.377 "name": "NVMe0", 00:22:19.377 "trtype": "tcp", 00:22:19.377 "traddr": "10.0.0.2", 00:22:19.377 "adrfam": "ipv4", 00:22:19.377 
"trsvcid": "4420", 00:22:19.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.377 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:19.377 "hostaddr": "10.0.0.2", 00:22:19.377 "hostsvcid": "60000", 00:22:19.377 "prchk_reftag": false, 00:22:19.377 "prchk_guard": false, 00:22:19.377 "hdgst": false, 00:22:19.377 "ddgst": false, 00:22:19.377 "method": "bdev_nvme_attach_controller", 00:22:19.377 "req_id": 1 00:22:19.377 } 00:22:19.377 Got JSON-RPC error response 00:22:19.377 response: 00:22:19.377 { 00:22:19.377 "code": -114, 00:22:19.377 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:19.377 } 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.377 request: 00:22:19.377 { 00:22:19.377 "name": "NVMe0", 00:22:19.377 "trtype": "tcp", 00:22:19.377 "traddr": "10.0.0.2", 00:22:19.377 "adrfam": "ipv4", 00:22:19.377 "trsvcid": "4420", 00:22:19.377 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:19.377 "hostaddr": "10.0.0.2", 00:22:19.377 "hostsvcid": "60000", 00:22:19.377 "prchk_reftag": false, 00:22:19.377 "prchk_guard": false, 00:22:19.377 "hdgst": false, 00:22:19.377 "ddgst": false, 00:22:19.377 "method": "bdev_nvme_attach_controller", 00:22:19.377 "req_id": 1 00:22:19.377 } 00:22:19.377 Got JSON-RPC error response 00:22:19.377 response: 00:22:19.377 { 00:22:19.377 "code": -114, 00:22:19.377 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:22:19.377 } 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:19.377 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.378 request: 00:22:19.378 { 00:22:19.378 "name": "NVMe0", 00:22:19.378 "trtype": "tcp", 00:22:19.378 "traddr": "10.0.0.2", 00:22:19.378 "adrfam": "ipv4", 00:22:19.378 "trsvcid": "4420", 00:22:19.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.378 "hostaddr": "10.0.0.2", 00:22:19.378 "hostsvcid": "60000", 00:22:19.378 "prchk_reftag": false, 00:22:19.378 "prchk_guard": false, 00:22:19.378 "hdgst": false, 00:22:19.378 "ddgst": false, 00:22:19.378 "multipath": "disable", 00:22:19.378 "method": "bdev_nvme_attach_controller", 00:22:19.378 "req_id": 1 00:22:19.378 } 00:22:19.378 Got JSON-RPC error response 00:22:19.378 response: 00:22:19.378 { 00:22:19.378 "code": -114, 00:22:19.378 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:19.378 } 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.378 request: 00:22:19.378 { 00:22:19.378 "name": "NVMe0", 00:22:19.378 "trtype": "tcp", 00:22:19.378 "traddr": "10.0.0.2", 00:22:19.378 "adrfam": "ipv4", 00:22:19.378 "trsvcid": "4420", 00:22:19.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.378 "hostaddr": "10.0.0.2", 00:22:19.378 "hostsvcid": "60000", 00:22:19.378 "prchk_reftag": false, 00:22:19.378 "prchk_guard": false, 00:22:19.378 "hdgst": false, 00:22:19.378 "ddgst": false, 00:22:19.378 "multipath": "failover", 00:22:19.378 "method": "bdev_nvme_attach_controller", 00:22:19.378 "req_id": 1 00:22:19.378 } 00:22:19.378 Got JSON-RPC error response 00:22:19.378 response: 00:22:19.378 { 00:22:19.378 "code": -114, 00:22:19.378 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:19.378 } 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.378 00:22:19.378 22:09:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.378 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.638 00:22:19.638 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.638 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.638 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:19.638 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.638 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.638 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.638 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:19.638 22:09:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.017 0 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2758208 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2758208 ']' 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2758208 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.017 22:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2758208 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
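The four -114 responses above are the negative cases of this test: once a controller named NVMe0 exists, re-attaching under the same name with a different hostnqn, a different subsystem NQN, or with multipath set to disable or failover is rejected with "A controller named NVMe0 already exists...". The attach at host/multicontroller.sh@79, which only adds the second listener port 4421 under the same name and NQN, goes through, and bdev_nvme_get_controllers then reports two controllers (NVMe0 plus the separately attached NVMe1) before bdevperf.py perform_tests drives the one-second write run summarized below. As a hand-run sketch over the bdevperf socket (command lines copied from the trace, again assuming scripts/rpc.py in place of the rpc_cmd helper):
# First path; creates bdev NVMe0n1 (succeeds in the trace):
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Same name, different subsystem NQN; rejected with JSON-RPC error -114:
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
# Second listener port under the same name and NQN; accepted in the trace:
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
# Kick off the configured write workload:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests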
00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2758208' 00:22:21.017 killing process with pid 2758208 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2758208 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2758208 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.017 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.277 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.277 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:21.277 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:21.277 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:21.277 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:21.277 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:21.277 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:21.277 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:21.277 [2024-07-24 22:09:57.334253] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:22:21.277 [2024-07-24 22:09:57.334307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2758208 ] 00:22:21.277 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.277 [2024-07-24 22:09:57.404709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.277 [2024-07-24 22:09:57.473518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.277 [2024-07-24 22:09:58.812933] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 7aaf0f09-b2d8-43c3-b154-313f1b8b8cfd already exists 00:22:21.277 [2024-07-24 22:09:58.812965] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:7aaf0f09-b2d8-43c3-b154-313f1b8b8cfd alias for bdev NVMe1n1 00:22:21.277 [2024-07-24 22:09:58.812975] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:21.277 Running I/O for 1 seconds... 00:22:21.277 00:22:21.277 Latency(us) 00:22:21.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.277 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:21.277 NVMe0n1 : 1.00 25794.14 100.76 0.00 0.00 4952.41 3591.37 14889.78 00:22:21.277 =================================================================================================================== 00:22:21.277 Total : 25794.14 100.76 0.00 0.00 4952.41 3591.37 14889.78 00:22:21.278 Received shutdown signal, test time was about 1.000000 seconds 00:22:21.278 00:22:21.278 Latency(us) 00:22:21.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.278 =================================================================================================================== 00:22:21.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.278 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.278 rmmod nvme_tcp 00:22:21.278 rmmod nvme_fabrics 00:22:21.278 rmmod nvme_keyring 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2757929 ']' 00:22:21.278 22:10:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2757929 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2757929 ']' 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2757929 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2757929 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2757929' 00:22:21.278 killing process with pid 2757929 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2757929 00:22:21.278 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2757929 00:22:21.537 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:21.537 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:21.537 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:21.537 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.537 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:21.537 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.537 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.537 22:10:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.076 22:10:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:24.076 00:22:24.076 real 0m13.230s 00:22:24.076 user 0m17.349s 00:22:24.076 sys 0m6.049s 00:22:24.076 22:10:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:24.076 22:10:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.076 ************************************ 00:22:24.076 END TEST nvmf_multicontroller 00:22:24.076 ************************************ 00:22:24.076 22:10:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:24.076 22:10:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:24.076 22:10:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:24.076 22:10:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.076 ************************************ 00:22:24.076 START TEST nvmf_aer 00:22:24.076 ************************************ 00:22:24.076 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:24.076 * Looking for test storage... 00:22:24.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:24.077 22:10:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:30.668 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:30.668 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:30.668 Found net devices under 0000:af:00.0: cvl_0_0 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.668 22:10:08 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:30.668 Found net devices under 0000:af:00.1: cvl_0_1 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.668 22:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.668 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.668 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.668 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.668 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.668 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.668 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.668 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:22:30.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:22:30.668 00:22:30.668 --- 10.0.0.2 ping statistics --- 00:22:30.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.668 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:30.668 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:22:30.668 00:22:30.668 --- 10.0.0.1 ping statistics --- 00:22:30.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.668 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:22:30.668 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2762254 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2762254 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2762254 ']' 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.669 22:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.669 [2024-07-24 22:10:09.333080] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:22:30.669 [2024-07-24 22:10:09.333134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.669 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.669 [2024-07-24 22:10:09.408496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.669 [2024-07-24 22:10:09.484022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.669 [2024-07-24 22:10:09.484061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.669 [2024-07-24 22:10:09.484070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.669 [2024-07-24 22:10:09.484079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.669 [2024-07-24 22:10:09.484086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.669 [2024-07-24 22:10:09.484134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.669 [2024-07-24 22:10:09.484240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.669 [2024-07-24 22:10:09.484269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.669 [2024-07-24 22:10:09.484269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.237 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.237 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:31.237 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.237 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.237 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.237 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.237 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.237 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.237 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.238 [2024-07-24 22:10:10.214106] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.238 Malloc0 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.238 22:10:10 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.238 [2024-07-24 22:10:10.268576] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.238 [ 00:22:31.238 { 00:22:31.238 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:31.238 "subtype": "Discovery", 00:22:31.238 "listen_addresses": [], 00:22:31.238 "allow_any_host": true, 00:22:31.238 "hosts": [] 00:22:31.238 }, 00:22:31.238 { 00:22:31.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.238 "subtype": "NVMe", 00:22:31.238 "listen_addresses": [ 00:22:31.238 { 00:22:31.238 "trtype": "TCP", 00:22:31.238 "adrfam": "IPv4", 00:22:31.238 "traddr": "10.0.0.2", 00:22:31.238 "trsvcid": "4420" 00:22:31.238 } 00:22:31.238 ], 00:22:31.238 "allow_any_host": true, 00:22:31.238 "hosts": [], 00:22:31.238 "serial_number": "SPDK00000000000001", 00:22:31.238 "model_number": "SPDK bdev Controller", 00:22:31.238 "max_namespaces": 2, 00:22:31.238 "min_cntlid": 1, 00:22:31.238 "max_cntlid": 65519, 00:22:31.238 "namespaces": [ 00:22:31.238 { 00:22:31.238 "nsid": 1, 00:22:31.238 "bdev_name": "Malloc0", 00:22:31.238 "name": "Malloc0", 00:22:31.238 "nguid": "944E746DCCA841B3B8708213742322AC", 00:22:31.238 "uuid": "944e746d-cca8-41b3-b870-8213742322ac" 00:22:31.238 } 00:22:31.238 ] 00:22:31.238 } 00:22:31.238 ] 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2762448 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:31.238 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:31.238 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.498 Malloc1 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.498 Asynchronous Event Request test 00:22:31.498 Attaching to 10.0.0.2 00:22:31.498 Attached to 10.0.0.2 00:22:31.498 Registering asynchronous event callbacks... 00:22:31.498 Starting namespace attribute notice tests for all controllers... 00:22:31.498 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:31.498 aer_cb - Changed Namespace 00:22:31.498 Cleaning up... 
00:22:31.498 [ 00:22:31.498 { 00:22:31.498 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:31.498 "subtype": "Discovery", 00:22:31.498 "listen_addresses": [], 00:22:31.498 "allow_any_host": true, 00:22:31.498 "hosts": [] 00:22:31.498 }, 00:22:31.498 { 00:22:31.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.498 "subtype": "NVMe", 00:22:31.498 "listen_addresses": [ 00:22:31.498 { 00:22:31.498 "trtype": "TCP", 00:22:31.498 "adrfam": "IPv4", 00:22:31.498 "traddr": "10.0.0.2", 00:22:31.498 "trsvcid": "4420" 00:22:31.498 } 00:22:31.498 ], 00:22:31.498 "allow_any_host": true, 00:22:31.498 "hosts": [], 00:22:31.498 "serial_number": "SPDK00000000000001", 00:22:31.498 "model_number": "SPDK bdev Controller", 00:22:31.498 "max_namespaces": 2, 00:22:31.498 "min_cntlid": 1, 00:22:31.498 "max_cntlid": 65519, 00:22:31.498 "namespaces": [ 00:22:31.498 { 00:22:31.498 "nsid": 1, 00:22:31.498 "bdev_name": "Malloc0", 00:22:31.498 "name": "Malloc0", 00:22:31.498 "nguid": "944E746DCCA841B3B8708213742322AC", 00:22:31.498 "uuid": "944e746d-cca8-41b3-b870-8213742322ac" 00:22:31.498 }, 00:22:31.498 { 00:22:31.498 "nsid": 2, 00:22:31.498 "bdev_name": "Malloc1", 00:22:31.498 "name": "Malloc1", 00:22:31.498 "nguid": "4774BC0FE67A4CDAB753BD923DE0987D", 00:22:31.498 "uuid": "4774bc0f-e67a-4cda-b753-bd923de0987d" 00:22:31.498 } 00:22:31.498 ] 00:22:31.498 } 00:22:31.498 ] 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2762448 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.498 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.499 rmmod 
nvme_tcp 00:22:31.499 rmmod nvme_fabrics 00:22:31.499 rmmod nvme_keyring 00:22:31.499 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2762254 ']' 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2762254 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2762254 ']' 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2762254 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2762254 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2762254' 00:22:31.759 killing process with pid 2762254 00:22:31.759 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2762254 00:22:31.760 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2762254 00:22:31.760 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:31.760 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.760 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.760 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.760 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.760 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.760 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.760 22:10:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.305 00:22:34.305 real 0m10.267s 00:22:34.305 user 0m7.586s 00:22:34.305 sys 0m5.427s 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.305 ************************************ 00:22:34.305 END TEST nvmf_aer 00:22:34.305 ************************************ 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.305 
************************************ 00:22:34.305 START TEST nvmf_async_init 00:22:34.305 ************************************ 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:34.305 * Looking for test storage... 00:22:34.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.305 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.306 
22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5c47c9f730dc485cb1730aecec4557f1 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.306 22:10:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:40.946 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:40.947 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:40.947 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.947 
22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:40.947 Found net devices under 0000:af:00.0: cvl_0_0 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:40.947 Found net devices under 0000:af:00.1: cvl_0_1 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.947 22:10:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.947 22:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:40.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:22:40.947 00:22:40.947 --- 10.0.0.2 ping statistics --- 00:22:40.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.947 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:22:40.947 00:22:40.947 --- 10.0.0.1 ping statistics --- 00:22:40.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.947 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:40.947 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.206 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:41.206 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.206 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2766189 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2766189 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2766189 ']' 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.207 22:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.207 [2024-07-24 22:10:20.247960] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:22:41.207 [2024-07-24 22:10:20.248009] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.207 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.207 [2024-07-24 22:10:20.323876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.207 [2024-07-24 22:10:20.392742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.207 [2024-07-24 22:10:20.392780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.207 [2024-07-24 22:10:20.392789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.207 [2024-07-24 22:10:20.392798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.207 [2024-07-24 22:10:20.392805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.207 [2024-07-24 22:10:20.392836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.145 [2024-07-24 22:10:21.094336] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.145 null0 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:42.145 22:10:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5c47c9f730dc485cb1730aecec4557f1 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.145 [2024-07-24 22:10:21.134564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.145 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.404 nvme0n1 00:22:42.404 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.404 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:42.404 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.404 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.404 [ 00:22:42.404 { 00:22:42.404 "name": "nvme0n1", 00:22:42.404 "aliases": [ 00:22:42.404 "5c47c9f7-30dc-485c-b173-0aecec4557f1" 00:22:42.404 ], 00:22:42.404 "product_name": "NVMe disk", 00:22:42.404 "block_size": 512, 00:22:42.404 "num_blocks": 2097152, 00:22:42.404 "uuid": "5c47c9f7-30dc-485c-b173-0aecec4557f1", 00:22:42.404 "assigned_rate_limits": { 00:22:42.404 "rw_ios_per_sec": 0, 00:22:42.404 "rw_mbytes_per_sec": 0, 00:22:42.404 "r_mbytes_per_sec": 0, 00:22:42.404 "w_mbytes_per_sec": 0 00:22:42.404 }, 00:22:42.404 "claimed": false, 00:22:42.404 "zoned": false, 00:22:42.404 "supported_io_types": { 00:22:42.404 "read": true, 00:22:42.404 "write": true, 00:22:42.404 "unmap": false, 00:22:42.404 "flush": true, 00:22:42.404 "reset": true, 00:22:42.404 "nvme_admin": true, 00:22:42.404 "nvme_io": true, 00:22:42.404 "nvme_io_md": false, 00:22:42.404 "write_zeroes": true, 00:22:42.404 "zcopy": false, 00:22:42.404 "get_zone_info": false, 00:22:42.404 "zone_management": false, 00:22:42.404 "zone_append": false, 00:22:42.404 "compare": true, 00:22:42.404 "compare_and_write": true, 00:22:42.404 "abort": true, 00:22:42.404 "seek_hole": false, 00:22:42.404 "seek_data": false, 00:22:42.404 "copy": true, 00:22:42.404 "nvme_iov_md": 
false 00:22:42.404 }, 00:22:42.404 "memory_domains": [ 00:22:42.404 { 00:22:42.404 "dma_device_id": "system", 00:22:42.404 "dma_device_type": 1 00:22:42.404 } 00:22:42.404 ], 00:22:42.404 "driver_specific": { 00:22:42.404 "nvme": [ 00:22:42.404 { 00:22:42.404 "trid": { 00:22:42.404 "trtype": "TCP", 00:22:42.404 "adrfam": "IPv4", 00:22:42.404 "traddr": "10.0.0.2", 00:22:42.404 "trsvcid": "4420", 00:22:42.404 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:42.404 }, 00:22:42.404 "ctrlr_data": { 00:22:42.404 "cntlid": 1, 00:22:42.404 "vendor_id": "0x8086", 00:22:42.404 "model_number": "SPDK bdev Controller", 00:22:42.404 "serial_number": "00000000000000000000", 00:22:42.404 "firmware_revision": "24.09", 00:22:42.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.404 "oacs": { 00:22:42.404 "security": 0, 00:22:42.404 "format": 0, 00:22:42.404 "firmware": 0, 00:22:42.404 "ns_manage": 0 00:22:42.404 }, 00:22:42.404 "multi_ctrlr": true, 00:22:42.404 "ana_reporting": false 00:22:42.404 }, 00:22:42.404 "vs": { 00:22:42.404 "nvme_version": "1.3" 00:22:42.404 }, 00:22:42.404 "ns_data": { 00:22:42.404 "id": 1, 00:22:42.404 "can_share": true 00:22:42.404 } 00:22:42.404 } 00:22:42.404 ], 00:22:42.404 "mp_policy": "active_passive" 00:22:42.405 } 00:22:42.405 } 00:22:42.405 ] 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.405 [2024-07-24 22:10:21.383073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.405 [2024-07-24 22:10:21.383136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10314d0 (9): Bad file descriptor 00:22:42.405 [2024-07-24 22:10:21.514791] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
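(For reference, the async_init flow traced above reduces to the short sketch below. It assumes a running nvmf_tgt and calls SPDK's scripts/rpc.py directly in place of the test's rpc_cmd wrapper; the 10.0.0.2:4420 address, null-bdev size and namespace GUID are simply the values this run used.)

# Target side: TCP transport, a 1 GiB null bdev (512-byte blocks), one subsystem with one namespace and a listener
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_null_create null0 1024 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5c47c9f730dc485cb1730aecec4557f1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host side: attach over NVMe/TCP, inspect the resulting bdev, then exercise a controller reset
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1
./scripts/rpc.py bdev_nvme_reset_controller nvme0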
00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.405 [ 00:22:42.405 { 00:22:42.405 "name": "nvme0n1", 00:22:42.405 "aliases": [ 00:22:42.405 "5c47c9f7-30dc-485c-b173-0aecec4557f1" 00:22:42.405 ], 00:22:42.405 "product_name": "NVMe disk", 00:22:42.405 "block_size": 512, 00:22:42.405 "num_blocks": 2097152, 00:22:42.405 "uuid": "5c47c9f7-30dc-485c-b173-0aecec4557f1", 00:22:42.405 "assigned_rate_limits": { 00:22:42.405 "rw_ios_per_sec": 0, 00:22:42.405 "rw_mbytes_per_sec": 0, 00:22:42.405 "r_mbytes_per_sec": 0, 00:22:42.405 "w_mbytes_per_sec": 0 00:22:42.405 }, 00:22:42.405 "claimed": false, 00:22:42.405 "zoned": false, 00:22:42.405 "supported_io_types": { 00:22:42.405 "read": true, 00:22:42.405 "write": true, 00:22:42.405 "unmap": false, 00:22:42.405 "flush": true, 00:22:42.405 "reset": true, 00:22:42.405 "nvme_admin": true, 00:22:42.405 "nvme_io": true, 00:22:42.405 "nvme_io_md": false, 00:22:42.405 "write_zeroes": true, 00:22:42.405 "zcopy": false, 00:22:42.405 "get_zone_info": false, 00:22:42.405 "zone_management": false, 00:22:42.405 "zone_append": false, 00:22:42.405 "compare": true, 00:22:42.405 "compare_and_write": true, 00:22:42.405 "abort": true, 00:22:42.405 "seek_hole": false, 00:22:42.405 "seek_data": false, 00:22:42.405 "copy": true, 00:22:42.405 "nvme_iov_md": false 00:22:42.405 }, 00:22:42.405 "memory_domains": [ 00:22:42.405 { 00:22:42.405 "dma_device_id": "system", 00:22:42.405 "dma_device_type": 1 00:22:42.405 } 00:22:42.405 ], 00:22:42.405 "driver_specific": { 00:22:42.405 "nvme": [ 00:22:42.405 { 00:22:42.405 "trid": { 00:22:42.405 "trtype": "TCP", 00:22:42.405 "adrfam": "IPv4", 00:22:42.405 "traddr": "10.0.0.2", 00:22:42.405 "trsvcid": "4420", 00:22:42.405 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:42.405 }, 00:22:42.405 "ctrlr_data": { 00:22:42.405 "cntlid": 2, 00:22:42.405 "vendor_id": "0x8086", 00:22:42.405 "model_number": "SPDK bdev Controller", 00:22:42.405 "serial_number": "00000000000000000000", 00:22:42.405 "firmware_revision": "24.09", 00:22:42.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.405 "oacs": { 00:22:42.405 "security": 0, 00:22:42.405 "format": 0, 00:22:42.405 "firmware": 0, 00:22:42.405 "ns_manage": 0 00:22:42.405 }, 00:22:42.405 "multi_ctrlr": true, 00:22:42.405 "ana_reporting": false 00:22:42.405 }, 00:22:42.405 "vs": { 00:22:42.405 "nvme_version": "1.3" 00:22:42.405 }, 00:22:42.405 "ns_data": { 00:22:42.405 "id": 1, 00:22:42.405 "can_share": true 00:22:42.405 } 00:22:42.405 } 00:22:42.405 ], 00:22:42.405 "mp_policy": "active_passive" 00:22:42.405 } 00:22:42.405 } 00:22:42.405 ] 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.405 22:10:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zIRLAWFYP5 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zIRLAWFYP5 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.405 [2024-07-24 22:10:21.563627] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:42.405 [2024-07-24 22:10:21.563748] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zIRLAWFYP5 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.405 [2024-07-24 22:10:21.571647] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zIRLAWFYP5 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.405 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.405 [2024-07-24 22:10:21.579679] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.405 [2024-07-24 22:10:21.579721] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:42.663 nvme0n1 00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.663 [ 00:22:42.663 { 00:22:42.663 "name": "nvme0n1", 00:22:42.663 "aliases": [ 00:22:42.663 "5c47c9f7-30dc-485c-b173-0aecec4557f1" 00:22:42.663 ], 00:22:42.663 "product_name": "NVMe disk", 00:22:42.663 "block_size": 512, 00:22:42.663 "num_blocks": 2097152, 00:22:42.663 "uuid": "5c47c9f7-30dc-485c-b173-0aecec4557f1", 00:22:42.663 "assigned_rate_limits": { 00:22:42.663 "rw_ios_per_sec": 0, 00:22:42.663 "rw_mbytes_per_sec": 0, 00:22:42.663 "r_mbytes_per_sec": 0, 00:22:42.663 "w_mbytes_per_sec": 0 00:22:42.663 }, 00:22:42.663 "claimed": false, 00:22:42.663 "zoned": false, 00:22:42.663 "supported_io_types": { 00:22:42.663 "read": true, 00:22:42.663 "write": true, 00:22:42.663 "unmap": false, 00:22:42.663 "flush": true, 00:22:42.663 "reset": true, 00:22:42.663 "nvme_admin": true, 00:22:42.663 "nvme_io": true, 00:22:42.663 "nvme_io_md": false, 00:22:42.663 "write_zeroes": true, 00:22:42.663 "zcopy": false, 00:22:42.663 "get_zone_info": false, 00:22:42.663 "zone_management": false, 00:22:42.663 "zone_append": false, 00:22:42.663 "compare": true, 00:22:42.663 "compare_and_write": true, 00:22:42.663 "abort": true, 00:22:42.663 "seek_hole": false, 00:22:42.663 "seek_data": false, 00:22:42.663 "copy": true, 00:22:42.663 "nvme_iov_md": false 00:22:42.663 }, 00:22:42.663 "memory_domains": [ 00:22:42.663 { 00:22:42.663 "dma_device_id": "system", 00:22:42.663 "dma_device_type": 1 00:22:42.663 } 00:22:42.663 ], 00:22:42.663 "driver_specific": { 00:22:42.663 "nvme": [ 00:22:42.663 { 00:22:42.663 "trid": { 00:22:42.663 "trtype": "TCP", 00:22:42.663 "adrfam": "IPv4", 00:22:42.663 "traddr": "10.0.0.2", 00:22:42.663 "trsvcid": "4421", 00:22:42.663 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:42.663 }, 00:22:42.663 "ctrlr_data": { 00:22:42.663 "cntlid": 3, 00:22:42.663 "vendor_id": "0x8086", 00:22:42.663 "model_number": "SPDK bdev Controller", 00:22:42.663 "serial_number": "00000000000000000000", 00:22:42.663 "firmware_revision": "24.09", 00:22:42.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.663 "oacs": { 00:22:42.663 "security": 0, 00:22:42.663 "format": 0, 00:22:42.663 "firmware": 0, 00:22:42.663 "ns_manage": 0 00:22:42.663 }, 00:22:42.663 "multi_ctrlr": true, 00:22:42.663 "ana_reporting": false 00:22:42.663 }, 00:22:42.663 "vs": { 00:22:42.663 "nvme_version": "1.3" 00:22:42.663 }, 00:22:42.663 "ns_data": { 00:22:42.663 "id": 1, 00:22:42.663 "can_share": true 00:22:42.663 } 00:22:42.663 } 00:22:42.663 ], 00:22:42.663 "mp_policy": "active_passive" 00:22:42.663 } 00:22:42.663 } 00:22:42.663 ] 00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.663 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.zIRLAWFYP5 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:42.664 22:10:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.664 rmmod nvme_tcp 00:22:42.664 rmmod nvme_fabrics 00:22:42.664 rmmod nvme_keyring 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2766189 ']' 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2766189 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2766189 ']' 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2766189 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2766189 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2766189' 00:22:42.664 killing process with pid 2766189 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2766189 00:22:42.664 [2024-07-24 22:10:21.798464] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:42.664 [2024-07-24 22:10:21.798493] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:42.664 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2766189 00:22:42.922 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.922 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.922 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.922 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.922 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.922 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.922 22:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.922 22:10:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.826 22:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:44.826 00:22:44.826 real 0m10.902s 00:22:44.826 user 0m3.726s 00:22:44.826 sys 0m5.733s 00:22:44.826 22:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:44.826 22:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.826 ************************************ 00:22:44.826 END TEST nvmf_async_init 00:22:44.826 ************************************ 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.085 ************************************ 00:22:45.085 START TEST dma 00:22:45.085 ************************************ 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:45.085 * Looking for test storage... 00:22:45.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.085 
22:10:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.085 22:10:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.086 22:10:24 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:45.086 00:22:45.086 real 0m0.126s 00:22:45.086 user 0m0.055s 00:22:45.086 sys 0m0.082s 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:45.086 ************************************ 00:22:45.086 END TEST dma 00:22:45.086 ************************************ 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.086 22:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.345 ************************************ 00:22:45.345 START TEST nvmf_identify 00:22:45.345 ************************************ 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:45.345 * Looking for test storage... 00:22:45.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:45.345 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.346 22:10:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.916 22:10:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:51.916 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.916 22:10:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:51.916 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:51.916 Found net devices under 0000:af:00.0: cvl_0_0 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:51.916 Found net devices under 0000:af:00.1: cvl_0_1 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.916 22:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.916 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.916 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.916 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:22:51.916 00:22:51.917 --- 10.0.0.2 ping statistics --- 00:22:51.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.917 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:22:51.917 00:22:51.917 --- 10.0.0.1 ping statistics --- 00:22:51.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.917 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.917 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.176 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:52.176 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.176 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.176 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2770133 00:22:52.176 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:52.176 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.176 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2770133 00:22:52.176 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2770133 ']' 00:22:52.176 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.177 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:52.177 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.177 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:52.177 22:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.177 [2024-07-24 22:10:31.200154] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
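(The nvmf_tgt launch just traced can be reproduced with the sketch below. The namespace name cvl_0_0_ns_spdk and the -i/-e/-m flags are taken from the trace; the polling loop is a simplified stand-in for the test's waitforlisten helper and assumes the default RPC socket at /var/tmp/spdk.sock.)

# Start the target inside the test network namespace: -i 0 shared-memory id, -e 0xFFFF tracepoint mask, -m 0xF = cores 0-3
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Wait until the app answers on its RPC socket before sending configuration RPCs
until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done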
00:22:52.177 [2024-07-24 22:10:31.200209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.177 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.177 [2024-07-24 22:10:31.274049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.177 [2024-07-24 22:10:31.350297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.177 [2024-07-24 22:10:31.350334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.177 [2024-07-24 22:10:31.350344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.177 [2024-07-24 22:10:31.350353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.177 [2024-07-24 22:10:31.350360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.177 [2024-07-24 22:10:31.350407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.177 [2024-07-24 22:10:31.350504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.177 [2024-07-24 22:10:31.350594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.177 [2024-07-24 22:10:31.350596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.116 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.116 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:53.116 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.116 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.116 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.117 [2024-07-24 22:10:32.016949] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.117 Malloc0 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.117 [2024-07-24 22:10:32.115828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.117 [ 00:22:53.117 { 00:22:53.117 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:53.117 "subtype": "Discovery", 00:22:53.117 "listen_addresses": [ 00:22:53.117 { 00:22:53.117 "trtype": "TCP", 00:22:53.117 "adrfam": "IPv4", 00:22:53.117 "traddr": "10.0.0.2", 00:22:53.117 "trsvcid": "4420" 00:22:53.117 } 00:22:53.117 ], 00:22:53.117 "allow_any_host": true, 00:22:53.117 "hosts": [] 00:22:53.117 }, 00:22:53.117 { 00:22:53.117 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.117 "subtype": "NVMe", 00:22:53.117 "listen_addresses": [ 00:22:53.117 { 00:22:53.117 "trtype": "TCP", 00:22:53.117 "adrfam": "IPv4", 00:22:53.117 "traddr": "10.0.0.2", 00:22:53.117 "trsvcid": "4420" 00:22:53.117 } 00:22:53.117 ], 00:22:53.117 "allow_any_host": true, 00:22:53.117 "hosts": [], 00:22:53.117 "serial_number": "SPDK00000000000001", 00:22:53.117 "model_number": "SPDK bdev Controller", 00:22:53.117 "max_namespaces": 32, 00:22:53.117 "min_cntlid": 1, 00:22:53.117 "max_cntlid": 65519, 00:22:53.117 "namespaces": [ 00:22:53.117 { 00:22:53.117 "nsid": 1, 00:22:53.117 "bdev_name": "Malloc0", 00:22:53.117 "name": "Malloc0", 00:22:53.117 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:53.117 "eui64": "ABCDEF0123456789", 00:22:53.117 "uuid": "acda57b8-a249-42b8-b840-dc16272b86ae" 00:22:53.117 } 00:22:53.117 ] 00:22:53.117 } 00:22:53.117 ] 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.117 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:53.117 [2024-07-24 22:10:32.173998] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:22:53.117 [2024-07-24 22:10:32.174039] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770409 ] 00:22:53.117 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.117 [2024-07-24 22:10:32.204106] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:53.117 [2024-07-24 22:10:32.204155] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:53.117 [2024-07-24 22:10:32.204162] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:53.117 [2024-07-24 22:10:32.204176] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:53.117 [2024-07-24 22:10:32.204185] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:53.117 [2024-07-24 22:10:32.204588] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:53.117 [2024-07-24 22:10:32.204616] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x657f00 0 00:22:53.117 [2024-07-24 22:10:32.218723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:53.117 [2024-07-24 22:10:32.218739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:53.117 [2024-07-24 22:10:32.218745] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:53.117 [2024-07-24 22:10:32.218749] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:53.117 [2024-07-24 22:10:32.218790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.218796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.218801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.117 [2024-07-24 22:10:32.218816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:53.117 [2024-07-24 22:10:32.218833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.117 [2024-07-24 22:10:32.226727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.117 [2024-07-24 22:10:32.226735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.117 [2024-07-24 22:10:32.226740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.226745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.117 [2024-07-24 22:10:32.226755] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:53.117 [2024-07-24 22:10:32.226762] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:53.117 [2024-07-24 22:10:32.226769] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:22:53.117 [2024-07-24 22:10:32.226783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.226788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.226792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.117 [2024-07-24 22:10:32.226800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.117 [2024-07-24 22:10:32.226814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.117 [2024-07-24 22:10:32.226999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.117 [2024-07-24 22:10:32.227006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.117 [2024-07-24 22:10:32.227010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.227015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.117 [2024-07-24 22:10:32.227024] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:53.117 [2024-07-24 22:10:32.227033] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:53.117 [2024-07-24 22:10:32.227042] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.227047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.227051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.117 [2024-07-24 22:10:32.227058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.117 [2024-07-24 22:10:32.227071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.117 [2024-07-24 22:10:32.227159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.117 [2024-07-24 22:10:32.227165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.117 [2024-07-24 22:10:32.227170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.227175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.117 [2024-07-24 22:10:32.227180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:53.117 [2024-07-24 22:10:32.227190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:53.117 [2024-07-24 22:10:32.227197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.227202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.117 [2024-07-24 22:10:32.227206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.117 [2024-07-24 22:10:32.227213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.117 [2024-07-24 22:10:32.227228] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.117 [2024-07-24 22:10:32.227326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.118 [2024-07-24 22:10:32.227333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.118 [2024-07-24 22:10:32.227337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.118 [2024-07-24 22:10:32.227348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:53.118 [2024-07-24 22:10:32.227358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227363] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.227374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.118 [2024-07-24 22:10:32.227386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.118 [2024-07-24 22:10:32.227473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.118 [2024-07-24 22:10:32.227480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.118 [2024-07-24 22:10:32.227484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.118 [2024-07-24 22:10:32.227494] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:53.118 [2024-07-24 22:10:32.227501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:53.118 [2024-07-24 22:10:32.227509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:53.118 [2024-07-24 22:10:32.227616] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:53.118 [2024-07-24 22:10:32.227622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:53.118 [2024-07-24 22:10:32.227631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.227647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.118 [2024-07-24 22:10:32.227658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.118 [2024-07-24 22:10:32.227816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.118 
[2024-07-24 22:10:32.227823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.118 [2024-07-24 22:10:32.227827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.118 [2024-07-24 22:10:32.227837] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:53.118 [2024-07-24 22:10:32.227848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.227866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.118 [2024-07-24 22:10:32.227878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.118 [2024-07-24 22:10:32.227970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.118 [2024-07-24 22:10:32.227976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.118 [2024-07-24 22:10:32.227981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.227985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.118 [2024-07-24 22:10:32.227990] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:53.118 [2024-07-24 22:10:32.227996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:53.118 [2024-07-24 22:10:32.228006] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:53.118 [2024-07-24 22:10:32.228015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:53.118 [2024-07-24 22:10:32.228025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.228030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.228037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.118 [2024-07-24 22:10:32.228048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.118 [2024-07-24 22:10:32.228186] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.118 [2024-07-24 22:10:32.228193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.118 [2024-07-24 22:10:32.228198] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.228203] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x657f00): datao=0, datal=4096, cccid=0 00:22:53.118 [2024-07-24 22:10:32.228208] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x6c2e40) on tqpair(0x657f00): expected_datao=0, payload_size=4096 00:22:53.118 [2024-07-24 22:10:32.228214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.228301] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.228306] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.268853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.118 [2024-07-24 22:10:32.268865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.118 [2024-07-24 22:10:32.268870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.268875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.118 [2024-07-24 22:10:32.268884] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:53.118 [2024-07-24 22:10:32.268891] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:53.118 [2024-07-24 22:10:32.268896] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:53.118 [2024-07-24 22:10:32.268903] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:53.118 [2024-07-24 22:10:32.268908] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:53.118 [2024-07-24 22:10:32.268915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:53.118 [2024-07-24 22:10:32.268928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:53.118 [2024-07-24 22:10:32.268939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.268945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.268949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.268958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.118 [2024-07-24 22:10:32.268973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.118 [2024-07-24 22:10:32.269062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.118 [2024-07-24 22:10:32.269069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.118 [2024-07-24 22:10:32.269073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.118 [2024-07-24 22:10:32.269085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.269101] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.118 [2024-07-24 22:10:32.269108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.269123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.118 [2024-07-24 22:10:32.269130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.269145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.118 [2024-07-24 22:10:32.269152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.269167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.118 [2024-07-24 22:10:32.269173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:53.118 [2024-07-24 22:10:32.269186] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:53.118 [2024-07-24 22:10:32.269193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.118 [2024-07-24 22:10:32.269198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x657f00) 00:22:53.118 [2024-07-24 22:10:32.269205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.118 [2024-07-24 22:10:32.269218] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2e40, cid 0, qid 0 00:22:53.118 [2024-07-24 22:10:32.269224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c2fc0, cid 1, qid 0 00:22:53.119 [2024-07-24 22:10:32.269231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c3140, cid 2, qid 0 00:22:53.119 [2024-07-24 22:10:32.269237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.119 [2024-07-24 22:10:32.269242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c3440, cid 4, qid 0 00:22:53.119 [2024-07-24 22:10:32.269362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.119 [2024-07-24 22:10:32.269369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.119 [2024-07-24 22:10:32.269373] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269378] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c3440) on tqpair=0x657f00 00:22:53.119 [2024-07-24 22:10:32.269383] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:53.119 [2024-07-24 22:10:32.269390] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:53.119 [2024-07-24 22:10:32.269402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x657f00) 00:22:53.119 [2024-07-24 22:10:32.269413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.119 [2024-07-24 22:10:32.269425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c3440, cid 4, qid 0 00:22:53.119 [2024-07-24 22:10:32.269534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.119 [2024-07-24 22:10:32.269541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.119 [2024-07-24 22:10:32.269546] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269550] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x657f00): datao=0, datal=4096, cccid=4 00:22:53.119 [2024-07-24 22:10:32.269556] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6c3440) on tqpair(0x657f00): expected_datao=0, payload_size=4096 00:22:53.119 [2024-07-24 22:10:32.269562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269569] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269574] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.119 [2024-07-24 22:10:32.269680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.119 [2024-07-24 22:10:32.269685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c3440) on tqpair=0x657f00 00:22:53.119 [2024-07-24 22:10:32.269702] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:53.119 [2024-07-24 22:10:32.269731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x657f00) 00:22:53.119 [2024-07-24 22:10:32.269744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.119 [2024-07-24 22:10:32.269752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x657f00) 00:22:53.119 [2024-07-24 22:10:32.269767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:53.119 [2024-07-24 22:10:32.269783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c3440, cid 4, qid 0 00:22:53.119 [2024-07-24 22:10:32.269790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c35c0, cid 5, qid 0 00:22:53.119 [2024-07-24 22:10:32.269907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.119 [2024-07-24 22:10:32.269914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.119 [2024-07-24 22:10:32.269918] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269923] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x657f00): datao=0, datal=1024, cccid=4 00:22:53.119 [2024-07-24 22:10:32.269928] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6c3440) on tqpair(0x657f00): expected_datao=0, payload_size=1024 00:22:53.119 [2024-07-24 22:10:32.269934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269941] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269945] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269952] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.119 [2024-07-24 22:10:32.269958] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.119 [2024-07-24 22:10:32.269962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.269967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c35c0) on tqpair=0x657f00 00:22:53.119 [2024-07-24 22:10:32.314721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.119 [2024-07-24 22:10:32.314732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.119 [2024-07-24 22:10:32.314737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.314742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c3440) on tqpair=0x657f00 00:22:53.119 [2024-07-24 22:10:32.314762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.314767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x657f00) 00:22:53.119 [2024-07-24 22:10:32.314776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.119 [2024-07-24 22:10:32.314796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c3440, cid 4, qid 0 00:22:53.119 [2024-07-24 22:10:32.314983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.119 [2024-07-24 22:10:32.314992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.119 [2024-07-24 22:10:32.314996] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.315001] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x657f00): datao=0, datal=3072, cccid=4 00:22:53.119 [2024-07-24 22:10:32.315007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6c3440) on tqpair(0x657f00): expected_datao=0, payload_size=3072 00:22:53.119 [2024-07-24 22:10:32.315012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.315020] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.315024] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.315119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.119 [2024-07-24 22:10:32.315126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.119 [2024-07-24 22:10:32.315131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.315135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c3440) on tqpair=0x657f00 00:22:53.119 [2024-07-24 22:10:32.315144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.315149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x657f00) 00:22:53.119 [2024-07-24 22:10:32.315156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.119 [2024-07-24 22:10:32.315173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c3440, cid 4, qid 0 00:22:53.119 [2024-07-24 22:10:32.315265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.119 [2024-07-24 22:10:32.315272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.119 [2024-07-24 22:10:32.315276] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.315281] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x657f00): datao=0, datal=8, cccid=4 00:22:53.119 [2024-07-24 22:10:32.315287] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6c3440) on tqpair(0x657f00): expected_datao=0, payload_size=8 00:22:53.119 [2024-07-24 22:10:32.315292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.315299] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.119 [2024-07-24 22:10:32.315303] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.383 [2024-07-24 22:10:32.356500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.384 [2024-07-24 22:10:32.356512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.384 [2024-07-24 22:10:32.356516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.384 [2024-07-24 22:10:32.356521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c3440) on tqpair=0x657f00 00:22:53.384 ===================================================== 00:22:53.384 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:53.384 ===================================================== 00:22:53.384 Controller Capabilities/Features 00:22:53.384 ================================ 00:22:53.384 Vendor ID: 0000 00:22:53.384 Subsystem Vendor ID: 0000 00:22:53.384 Serial Number: .................... 00:22:53.384 Model Number: ........................................ 
00:22:53.384 Firmware Version: 24.09 00:22:53.384 Recommended Arb Burst: 0 00:22:53.384 IEEE OUI Identifier: 00 00 00 00:22:53.384 Multi-path I/O 00:22:53.384 May have multiple subsystem ports: No 00:22:53.384 May have multiple controllers: No 00:22:53.384 Associated with SR-IOV VF: No 00:22:53.384 Max Data Transfer Size: 131072 00:22:53.384 Max Number of Namespaces: 0 00:22:53.384 Max Number of I/O Queues: 1024 00:22:53.384 NVMe Specification Version (VS): 1.3 00:22:53.384 NVMe Specification Version (Identify): 1.3 00:22:53.384 Maximum Queue Entries: 128 00:22:53.384 Contiguous Queues Required: Yes 00:22:53.384 Arbitration Mechanisms Supported 00:22:53.384 Weighted Round Robin: Not Supported 00:22:53.384 Vendor Specific: Not Supported 00:22:53.384 Reset Timeout: 15000 ms 00:22:53.384 Doorbell Stride: 4 bytes 00:22:53.384 NVM Subsystem Reset: Not Supported 00:22:53.384 Command Sets Supported 00:22:53.384 NVM Command Set: Supported 00:22:53.384 Boot Partition: Not Supported 00:22:53.384 Memory Page Size Minimum: 4096 bytes 00:22:53.384 Memory Page Size Maximum: 4096 bytes 00:22:53.384 Persistent Memory Region: Not Supported 00:22:53.384 Optional Asynchronous Events Supported 00:22:53.384 Namespace Attribute Notices: Not Supported 00:22:53.384 Firmware Activation Notices: Not Supported 00:22:53.384 ANA Change Notices: Not Supported 00:22:53.384 PLE Aggregate Log Change Notices: Not Supported 00:22:53.384 LBA Status Info Alert Notices: Not Supported 00:22:53.384 EGE Aggregate Log Change Notices: Not Supported 00:22:53.384 Normal NVM Subsystem Shutdown event: Not Supported 00:22:53.384 Zone Descriptor Change Notices: Not Supported 00:22:53.384 Discovery Log Change Notices: Supported 00:22:53.384 Controller Attributes 00:22:53.384 128-bit Host Identifier: Not Supported 00:22:53.384 Non-Operational Permissive Mode: Not Supported 00:22:53.384 NVM Sets: Not Supported 00:22:53.384 Read Recovery Levels: Not Supported 00:22:53.384 Endurance Groups: Not Supported 00:22:53.384 Predictable Latency Mode: Not Supported 00:22:53.384 Traffic Based Keep ALive: Not Supported 00:22:53.384 Namespace Granularity: Not Supported 00:22:53.384 SQ Associations: Not Supported 00:22:53.384 UUID List: Not Supported 00:22:53.384 Multi-Domain Subsystem: Not Supported 00:22:53.384 Fixed Capacity Management: Not Supported 00:22:53.384 Variable Capacity Management: Not Supported 00:22:53.384 Delete Endurance Group: Not Supported 00:22:53.384 Delete NVM Set: Not Supported 00:22:53.384 Extended LBA Formats Supported: Not Supported 00:22:53.384 Flexible Data Placement Supported: Not Supported 00:22:53.384 00:22:53.384 Controller Memory Buffer Support 00:22:53.384 ================================ 00:22:53.384 Supported: No 00:22:53.384 00:22:53.384 Persistent Memory Region Support 00:22:53.384 ================================ 00:22:53.384 Supported: No 00:22:53.384 00:22:53.384 Admin Command Set Attributes 00:22:53.384 ============================ 00:22:53.384 Security Send/Receive: Not Supported 00:22:53.384 Format NVM: Not Supported 00:22:53.384 Firmware Activate/Download: Not Supported 00:22:53.384 Namespace Management: Not Supported 00:22:53.384 Device Self-Test: Not Supported 00:22:53.384 Directives: Not Supported 00:22:53.384 NVMe-MI: Not Supported 00:22:53.384 Virtualization Management: Not Supported 00:22:53.384 Doorbell Buffer Config: Not Supported 00:22:53.384 Get LBA Status Capability: Not Supported 00:22:53.384 Command & Feature Lockdown Capability: Not Supported 00:22:53.384 Abort Command Limit: 1 00:22:53.384 Async 
Event Request Limit: 4 00:22:53.384 Number of Firmware Slots: N/A 00:22:53.384 Firmware Slot 1 Read-Only: N/A 00:22:53.384 Firmware Activation Without Reset: N/A 00:22:53.384 Multiple Update Detection Support: N/A 00:22:53.384 Firmware Update Granularity: No Information Provided 00:22:53.384 Per-Namespace SMART Log: No 00:22:53.384 Asymmetric Namespace Access Log Page: Not Supported 00:22:53.384 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:53.384 Command Effects Log Page: Not Supported 00:22:53.384 Get Log Page Extended Data: Supported 00:22:53.384 Telemetry Log Pages: Not Supported 00:22:53.384 Persistent Event Log Pages: Not Supported 00:22:53.384 Supported Log Pages Log Page: May Support 00:22:53.384 Commands Supported & Effects Log Page: Not Supported 00:22:53.384 Feature Identifiers & Effects Log Page:May Support 00:22:53.384 NVMe-MI Commands & Effects Log Page: May Support 00:22:53.384 Data Area 4 for Telemetry Log: Not Supported 00:22:53.384 Error Log Page Entries Supported: 128 00:22:53.384 Keep Alive: Not Supported 00:22:53.384 00:22:53.384 NVM Command Set Attributes 00:22:53.384 ========================== 00:22:53.384 Submission Queue Entry Size 00:22:53.384 Max: 1 00:22:53.384 Min: 1 00:22:53.384 Completion Queue Entry Size 00:22:53.384 Max: 1 00:22:53.384 Min: 1 00:22:53.384 Number of Namespaces: 0 00:22:53.384 Compare Command: Not Supported 00:22:53.384 Write Uncorrectable Command: Not Supported 00:22:53.384 Dataset Management Command: Not Supported 00:22:53.384 Write Zeroes Command: Not Supported 00:22:53.384 Set Features Save Field: Not Supported 00:22:53.384 Reservations: Not Supported 00:22:53.384 Timestamp: Not Supported 00:22:53.384 Copy: Not Supported 00:22:53.384 Volatile Write Cache: Not Present 00:22:53.384 Atomic Write Unit (Normal): 1 00:22:53.384 Atomic Write Unit (PFail): 1 00:22:53.384 Atomic Compare & Write Unit: 1 00:22:53.384 Fused Compare & Write: Supported 00:22:53.384 Scatter-Gather List 00:22:53.384 SGL Command Set: Supported 00:22:53.384 SGL Keyed: Supported 00:22:53.384 SGL Bit Bucket Descriptor: Not Supported 00:22:53.384 SGL Metadata Pointer: Not Supported 00:22:53.384 Oversized SGL: Not Supported 00:22:53.384 SGL Metadata Address: Not Supported 00:22:53.384 SGL Offset: Supported 00:22:53.384 Transport SGL Data Block: Not Supported 00:22:53.384 Replay Protected Memory Block: Not Supported 00:22:53.384 00:22:53.384 Firmware Slot Information 00:22:53.384 ========================= 00:22:53.384 Active slot: 0 00:22:53.384 00:22:53.384 00:22:53.384 Error Log 00:22:53.384 ========= 00:22:53.384 00:22:53.384 Active Namespaces 00:22:53.384 ================= 00:22:53.384 Discovery Log Page 00:22:53.384 ================== 00:22:53.384 Generation Counter: 2 00:22:53.384 Number of Records: 2 00:22:53.384 Record Format: 0 00:22:53.384 00:22:53.384 Discovery Log Entry 0 00:22:53.384 ---------------------- 00:22:53.384 Transport Type: 3 (TCP) 00:22:53.384 Address Family: 1 (IPv4) 00:22:53.384 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:53.384 Entry Flags: 00:22:53.384 Duplicate Returned Information: 1 00:22:53.384 Explicit Persistent Connection Support for Discovery: 1 00:22:53.384 Transport Requirements: 00:22:53.384 Secure Channel: Not Required 00:22:53.384 Port ID: 0 (0x0000) 00:22:53.384 Controller ID: 65535 (0xffff) 00:22:53.384 Admin Max SQ Size: 128 00:22:53.384 Transport Service Identifier: 4420 00:22:53.384 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:53.384 Transport Address: 10.0.0.2 00:22:53.384 
Discovery Log Entry 1 00:22:53.384 ---------------------- 00:22:53.384 Transport Type: 3 (TCP) 00:22:53.384 Address Family: 1 (IPv4) 00:22:53.384 Subsystem Type: 2 (NVM Subsystem) 00:22:53.384 Entry Flags: 00:22:53.384 Duplicate Returned Information: 0 00:22:53.384 Explicit Persistent Connection Support for Discovery: 0 00:22:53.384 Transport Requirements: 00:22:53.384 Secure Channel: Not Required 00:22:53.384 Port ID: 0 (0x0000) 00:22:53.384 Controller ID: 65535 (0xffff) 00:22:53.384 Admin Max SQ Size: 128 00:22:53.384 Transport Service Identifier: 4420 00:22:53.384 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:53.385 Transport Address: 10.0.0.2 [2024-07-24 22:10:32.356605] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:53.385 [2024-07-24 22:10:32.356617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2e40) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.356624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.385 [2024-07-24 22:10:32.356631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c2fc0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.356636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.385 [2024-07-24 22:10:32.356642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c3140) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.356648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.385 [2024-07-24 22:10:32.356654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.356659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.385 [2024-07-24 22:10:32.356670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.356675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.356680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.356687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.356702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.356817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.356824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.356829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.356833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.356841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.356846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.356850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.356857] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.356873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.356990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.356997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.357001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.357011] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:53.385 [2024-07-24 22:10:32.357017] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:53.385 [2024-07-24 22:10:32.357028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.357045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.357056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.357145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.357152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.357156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.357171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.357187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.357198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.357287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.357294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.357298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.357312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357322] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.357328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.357339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.357509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.357516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.357520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.357535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.357551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.357565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.357651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.357658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.357662] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.357676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.357692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.357703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.357795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.357802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.357807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.357822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.357838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.357849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.357940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.357946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.357951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.357965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.357974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.357981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.357992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.358077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.358084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.358088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.358093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.358102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.358107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.358112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.385 [2024-07-24 22:10:32.358118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.385 [2024-07-24 22:10:32.358130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.385 [2024-07-24 22:10:32.358218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.385 [2024-07-24 22:10:32.358224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.385 [2024-07-24 22:10:32.358229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.358233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.385 [2024-07-24 22:10:32.358243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.385 [2024-07-24 22:10:32.358248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.386 [2024-07-24 22:10:32.358259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.386 [2024-07-24 22:10:32.358270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.386 [2024-07-24 22:10:32.358358] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.386 [2024-07-24 22:10:32.358365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.386 [2024-07-24 22:10:32.358369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.386 [2024-07-24 22:10:32.358384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.386 [2024-07-24 22:10:32.358400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.386 [2024-07-24 22:10:32.358411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.386 [2024-07-24 22:10:32.358496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.386 [2024-07-24 22:10:32.358502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.386 [2024-07-24 22:10:32.358507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.386 [2024-07-24 22:10:32.358521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.386 [2024-07-24 22:10:32.358537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.386 [2024-07-24 22:10:32.358548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.386 [2024-07-24 22:10:32.358636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.386 [2024-07-24 22:10:32.358642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.386 [2024-07-24 22:10:32.358647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.386 [2024-07-24 22:10:32.358661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.358670] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.386 [2024-07-24 22:10:32.358677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.386 [2024-07-24 22:10:32.358688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.386 [2024-07-24 22:10:32.362726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.386 [2024-07-24 22:10:32.362739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.386 [2024-07-24 22:10:32.362744] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.362749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.386 [2024-07-24 22:10:32.362759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.362764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.362769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x657f00) 00:22:53.386 [2024-07-24 22:10:32.362776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.386 [2024-07-24 22:10:32.362790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6c32c0, cid 3, qid 0 00:22:53.386 [2024-07-24 22:10:32.362887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.386 [2024-07-24 22:10:32.362895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.386 [2024-07-24 22:10:32.362900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.362905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6c32c0) on tqpair=0x657f00 00:22:53.386 [2024-07-24 22:10:32.362914] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:53.386 00:22:53.386 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:53.386 [2024-07-24 22:10:32.402787] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:22:53.386 [2024-07-24 22:10:32.402834] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770417 ] 00:22:53.386 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.386 [2024-07-24 22:10:32.433741] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:53.386 [2024-07-24 22:10:32.433784] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:53.386 [2024-07-24 22:10:32.433790] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:53.386 [2024-07-24 22:10:32.433800] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:53.386 [2024-07-24 22:10:32.433808] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:53.386 [2024-07-24 22:10:32.434195] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:53.386 [2024-07-24 22:10:32.434216] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8adf00 0 00:22:53.386 [2024-07-24 22:10:32.448722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:53.386 [2024-07-24 22:10:32.448740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:53.386 [2024-07-24 22:10:32.448745] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:53.386 [2024-07-24 22:10:32.448750] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:53.386 [2024-07-24 22:10:32.448784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.448790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.448795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.386 [2024-07-24 22:10:32.448805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:53.386 [2024-07-24 22:10:32.448824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.386 [2024-07-24 22:10:32.456724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.386 [2024-07-24 22:10:32.456733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.386 [2024-07-24 22:10:32.456738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.456743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.386 [2024-07-24 22:10:32.456755] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:53.386 [2024-07-24 22:10:32.456762] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:53.386 [2024-07-24 22:10:32.456768] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:53.386 [2024-07-24 22:10:32.456780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.456785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.386 
[2024-07-24 22:10:32.456790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.386 [2024-07-24 22:10:32.456798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.386 [2024-07-24 22:10:32.456812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.386 [2024-07-24 22:10:32.457007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.386 [2024-07-24 22:10:32.457014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.386 [2024-07-24 22:10:32.457019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.457024] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.386 [2024-07-24 22:10:32.457032] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:53.386 [2024-07-24 22:10:32.457041] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:53.386 [2024-07-24 22:10:32.457049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.457054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.457059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.386 [2024-07-24 22:10:32.457066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.386 [2024-07-24 22:10:32.457079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.386 [2024-07-24 22:10:32.457171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.386 [2024-07-24 22:10:32.457178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.386 [2024-07-24 22:10:32.457182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.457187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.386 [2024-07-24 22:10:32.457193] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:53.386 [2024-07-24 22:10:32.457203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:53.386 [2024-07-24 22:10:32.457210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.457215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.386 [2024-07-24 22:10:32.457220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.386 [2024-07-24 22:10:32.457227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.386 [2024-07-24 22:10:32.457241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.386 [2024-07-24 22:10:32.457337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.386 [2024-07-24 22:10:32.457343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.386 
[2024-07-24 22:10:32.457348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.387 [2024-07-24 22:10:32.457358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:53.387 [2024-07-24 22:10:32.457369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.387 [2024-07-24 22:10:32.457386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.387 [2024-07-24 22:10:32.457398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.387 [2024-07-24 22:10:32.457483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.387 [2024-07-24 22:10:32.457490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.387 [2024-07-24 22:10:32.457494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.387 [2024-07-24 22:10:32.457504] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:53.387 [2024-07-24 22:10:32.457510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:53.387 [2024-07-24 22:10:32.457520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:53.387 [2024-07-24 22:10:32.457626] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:53.387 [2024-07-24 22:10:32.457631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:53.387 [2024-07-24 22:10:32.457639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.387 [2024-07-24 22:10:32.457655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.387 [2024-07-24 22:10:32.457667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.387 [2024-07-24 22:10:32.457757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.387 [2024-07-24 22:10:32.457765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.387 [2024-07-24 22:10:32.457769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.387 [2024-07-24 
22:10:32.457780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:53.387 [2024-07-24 22:10:32.457790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.387 [2024-07-24 22:10:32.457807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.387 [2024-07-24 22:10:32.457821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.387 [2024-07-24 22:10:32.457906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.387 [2024-07-24 22:10:32.457913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.387 [2024-07-24 22:10:32.457918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.387 [2024-07-24 22:10:32.457928] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:53.387 [2024-07-24 22:10:32.457934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:53.387 [2024-07-24 22:10:32.457943] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:53.387 [2024-07-24 22:10:32.457956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:53.387 [2024-07-24 22:10:32.457965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.457970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.387 [2024-07-24 22:10:32.457977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.387 [2024-07-24 22:10:32.457989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.387 [2024-07-24 22:10:32.458105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.387 [2024-07-24 22:10:32.458113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.387 [2024-07-24 22:10:32.458117] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.458122] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8adf00): datao=0, datal=4096, cccid=0 00:22:53.387 [2024-07-24 22:10:32.458128] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x918e40) on tqpair(0x8adf00): expected_datao=0, payload_size=4096 00:22:53.387 [2024-07-24 22:10:32.458134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.458242] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.458247] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.387 
[2024-07-24 22:10:32.498891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.387 [2024-07-24 22:10:32.498902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.387 [2024-07-24 22:10:32.498907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.498912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.387 [2024-07-24 22:10:32.498921] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:53.387 [2024-07-24 22:10:32.498927] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:53.387 [2024-07-24 22:10:32.498933] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:53.387 [2024-07-24 22:10:32.498939] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:53.387 [2024-07-24 22:10:32.498945] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:53.387 [2024-07-24 22:10:32.498951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:53.387 [2024-07-24 22:10:32.498961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:53.387 [2024-07-24 22:10:32.498975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.498980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.498985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.387 [2024-07-24 22:10:32.498993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.387 [2024-07-24 22:10:32.499007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.387 [2024-07-24 22:10:32.499093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.387 [2024-07-24 22:10:32.499100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.387 [2024-07-24 22:10:32.499104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.387 [2024-07-24 22:10:32.499116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8adf00) 00:22:53.387 [2024-07-24 22:10:32.499132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.387 [2024-07-24 22:10:32.499139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8adf00) 
00:22:53.387 [2024-07-24 22:10:32.499155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.387 [2024-07-24 22:10:32.499162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8adf00) 00:22:53.387 [2024-07-24 22:10:32.499177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.387 [2024-07-24 22:10:32.499184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.387 [2024-07-24 22:10:32.499200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.387 [2024-07-24 22:10:32.499206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:53.387 [2024-07-24 22:10:32.499219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:53.387 [2024-07-24 22:10:32.499226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.387 [2024-07-24 22:10:32.499231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8adf00) 00:22:53.387 [2024-07-24 22:10:32.499238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.387 [2024-07-24 22:10:32.499251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918e40, cid 0, qid 0 00:22:53.387 [2024-07-24 22:10:32.499257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x918fc0, cid 1, qid 0 00:22:53.387 [2024-07-24 22:10:32.499262] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x919140, cid 2, qid 0 00:22:53.387 [2024-07-24 22:10:32.499268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.387 [2024-07-24 22:10:32.499275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x919440, cid 4, qid 0 00:22:53.387 [2024-07-24 22:10:32.499389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.388 [2024-07-24 22:10:32.499396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.388 [2024-07-24 22:10:32.499400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x919440) on tqpair=0x8adf00 00:22:53.388 [2024-07-24 22:10:32.499411] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:53.388 [2024-07-24 22:10:32.499417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.499429] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.499437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.499445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8adf00) 00:22:53.388 [2024-07-24 22:10:32.499461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.388 [2024-07-24 22:10:32.499473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x919440, cid 4, qid 0 00:22:53.388 [2024-07-24 22:10:32.499560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.388 [2024-07-24 22:10:32.499567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.388 [2024-07-24 22:10:32.499572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x919440) on tqpair=0x8adf00 00:22:53.388 [2024-07-24 22:10:32.499628] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.499639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.499647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8adf00) 00:22:53.388 [2024-07-24 22:10:32.499659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.388 [2024-07-24 22:10:32.499671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x919440, cid 4, qid 0 00:22:53.388 [2024-07-24 22:10:32.499774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.388 [2024-07-24 22:10:32.499782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.388 [2024-07-24 22:10:32.499787] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499791] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8adf00): datao=0, datal=4096, cccid=4 00:22:53.388 [2024-07-24 22:10:32.499797] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x919440) on tqpair(0x8adf00): expected_datao=0, payload_size=4096 00:22:53.388 [2024-07-24 22:10:32.499803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499810] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499815] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.388 [2024-07-24 22:10:32.499858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:53.388 [2024-07-24 22:10:32.499864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x919440) on tqpair=0x8adf00 00:22:53.388 [2024-07-24 22:10:32.499879] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:53.388 [2024-07-24 22:10:32.499890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.499901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.499909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.499914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8adf00) 00:22:53.388 [2024-07-24 22:10:32.499921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.388 [2024-07-24 22:10:32.499934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x919440, cid 4, qid 0 00:22:53.388 [2024-07-24 22:10:32.500041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.388 [2024-07-24 22:10:32.500048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.388 [2024-07-24 22:10:32.500053] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500057] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8adf00): datao=0, datal=4096, cccid=4 00:22:53.388 [2024-07-24 22:10:32.500063] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x919440) on tqpair(0x8adf00): expected_datao=0, payload_size=4096 00:22:53.388 [2024-07-24 22:10:32.500069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500076] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500081] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.388 [2024-07-24 22:10:32.500185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.388 [2024-07-24 22:10:32.500189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x919440) on tqpair=0x8adf00 00:22:53.388 [2024-07-24 22:10:32.500206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.500216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.500225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8adf00) 00:22:53.388 [2024-07-24 22:10:32.500237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.388 [2024-07-24 22:10:32.500249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x919440, cid 4, qid 0 00:22:53.388 [2024-07-24 22:10:32.500346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.388 [2024-07-24 22:10:32.500353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.388 [2024-07-24 22:10:32.500358] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500362] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8adf00): datao=0, datal=4096, cccid=4 00:22:53.388 [2024-07-24 22:10:32.500368] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x919440) on tqpair(0x8adf00): expected_datao=0, payload_size=4096 00:22:53.388 [2024-07-24 22:10:32.500374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500383] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500387] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.388 [2024-07-24 22:10:32.500429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.388 [2024-07-24 22:10:32.500434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.388 [2024-07-24 22:10:32.500439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x919440) on tqpair=0x8adf00 00:22:53.388 [2024-07-24 22:10:32.500446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.500456] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.500466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.500474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.500481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.500487] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:53.388 [2024-07-24 22:10:32.500493] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:53.389 [2024-07-24 22:10:32.500499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:53.389 [2024-07-24 22:10:32.500505] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:53.389 [2024-07-24 22:10:32.500519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.500524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8adf00) 00:22:53.389 [2024-07-24 22:10:32.500531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.389 [2024-07-24 22:10:32.500539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.500543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.500548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8adf00) 00:22:53.389 [2024-07-24 22:10:32.500554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.389 [2024-07-24 22:10:32.500569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x919440, cid 4, qid 0 00:22:53.389 [2024-07-24 22:10:32.500575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9195c0, cid 5, qid 0 00:22:53.389 [2024-07-24 22:10:32.500682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.389 [2024-07-24 22:10:32.500689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.389 [2024-07-24 22:10:32.500694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.500699] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x919440) on tqpair=0x8adf00 00:22:53.389 [2024-07-24 22:10:32.500705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.389 [2024-07-24 22:10:32.500712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.389 [2024-07-24 22:10:32.504722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.504727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9195c0) on tqpair=0x8adf00 00:22:53.389 [2024-07-24 22:10:32.504739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.504744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8adf00) 00:22:53.389 [2024-07-24 22:10:32.504755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.389 [2024-07-24 22:10:32.504769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9195c0, cid 5, qid 0 00:22:53.389 [2024-07-24 22:10:32.505036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.389 [2024-07-24 22:10:32.505043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.389 [2024-07-24 22:10:32.505047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9195c0) on tqpair=0x8adf00 00:22:53.389 [2024-07-24 22:10:32.505063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8adf00) 00:22:53.389 [2024-07-24 22:10:32.505074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.389 [2024-07-24 22:10:32.505086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9195c0, cid 5, qid 0 00:22:53.389 [2024-07-24 22:10:32.505235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.389 [2024-07-24 22:10:32.505242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:53.389 [2024-07-24 22:10:32.505246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9195c0) on tqpair=0x8adf00 00:22:53.389 [2024-07-24 22:10:32.505261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8adf00) 00:22:53.389 [2024-07-24 22:10:32.505273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.389 [2024-07-24 22:10:32.505285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9195c0, cid 5, qid 0 00:22:53.389 [2024-07-24 22:10:32.505383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.389 [2024-07-24 22:10:32.505390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.389 [2024-07-24 22:10:32.505395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9195c0) on tqpair=0x8adf00 00:22:53.389 [2024-07-24 22:10:32.505415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8adf00) 00:22:53.389 [2024-07-24 22:10:32.505428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.389 [2024-07-24 22:10:32.505435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8adf00) 00:22:53.389 [2024-07-24 22:10:32.505447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.389 [2024-07-24 22:10:32.505455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8adf00) 00:22:53.389 [2024-07-24 22:10:32.505466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.389 [2024-07-24 22:10:32.505474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8adf00) 00:22:53.389 [2024-07-24 22:10:32.505487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.389 [2024-07-24 22:10:32.505500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9195c0, cid 5, qid 0 00:22:53.389 [2024-07-24 22:10:32.505505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x919440, cid 4, qid 0 00:22:53.389 [2024-07-24 22:10:32.505511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x919740, cid 6, qid 0 00:22:53.389 [2024-07-24 
22:10:32.505516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9198c0, cid 7, qid 0 00:22:53.389 [2024-07-24 22:10:32.505674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.389 [2024-07-24 22:10:32.505682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.389 [2024-07-24 22:10:32.505686] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505691] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8adf00): datao=0, datal=8192, cccid=5 00:22:53.389 [2024-07-24 22:10:32.505697] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9195c0) on tqpair(0x8adf00): expected_datao=0, payload_size=8192 00:22:53.389 [2024-07-24 22:10:32.505703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505892] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505898] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.389 [2024-07-24 22:10:32.505911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.389 [2024-07-24 22:10:32.505915] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505920] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8adf00): datao=0, datal=512, cccid=4 00:22:53.389 [2024-07-24 22:10:32.505926] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x919440) on tqpair(0x8adf00): expected_datao=0, payload_size=512 00:22:53.389 [2024-07-24 22:10:32.505931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505938] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505943] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.389 [2024-07-24 22:10:32.505955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.389 [2024-07-24 22:10:32.505959] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505964] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8adf00): datao=0, datal=512, cccid=6 00:22:53.389 [2024-07-24 22:10:32.505970] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x919740) on tqpair(0x8adf00): expected_datao=0, payload_size=512 00:22:53.389 [2024-07-24 22:10:32.505976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505982] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505987] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.505993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.389 [2024-07-24 22:10:32.505999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.389 [2024-07-24 22:10:32.506004] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.506008] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8adf00): datao=0, datal=4096, cccid=7 00:22:53.389 [2024-07-24 22:10:32.506014] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9198c0) on tqpair(0x8adf00): expected_datao=0, payload_size=4096 00:22:53.389 [2024-07-24 22:10:32.506020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.506027] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.506031] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.506042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.389 [2024-07-24 22:10:32.506048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.389 [2024-07-24 22:10:32.506053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.506058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9195c0) on tqpair=0x8adf00 00:22:53.389 [2024-07-24 22:10:32.506070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.389 [2024-07-24 22:10:32.506076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.389 [2024-07-24 22:10:32.506081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.506086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x919440) on tqpair=0x8adf00 00:22:53.389 [2024-07-24 22:10:32.506097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.389 [2024-07-24 22:10:32.506103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.389 [2024-07-24 22:10:32.506107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.389 [2024-07-24 22:10:32.506112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x919740) on tqpair=0x8adf00 00:22:53.389 [2024-07-24 22:10:32.506120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.390 [2024-07-24 22:10:32.506126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.390 [2024-07-24 22:10:32.506131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.390 [2024-07-24 22:10:32.506135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9198c0) on tqpair=0x8adf00 00:22:53.390 ===================================================== 00:22:53.390 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:53.390 ===================================================== 00:22:53.390 Controller Capabilities/Features 00:22:53.390 ================================ 00:22:53.390 Vendor ID: 8086 00:22:53.390 Subsystem Vendor ID: 8086 00:22:53.390 Serial Number: SPDK00000000000001 00:22:53.390 Model Number: SPDK bdev Controller 00:22:53.390 Firmware Version: 24.09 00:22:53.390 Recommended Arb Burst: 6 00:22:53.390 IEEE OUI Identifier: e4 d2 5c 00:22:53.390 Multi-path I/O 00:22:53.390 May have multiple subsystem ports: Yes 00:22:53.390 May have multiple controllers: Yes 00:22:53.390 Associated with SR-IOV VF: No 00:22:53.390 Max Data Transfer Size: 131072 00:22:53.390 Max Number of Namespaces: 32 00:22:53.390 Max Number of I/O Queues: 127 00:22:53.390 NVMe Specification Version (VS): 1.3 00:22:53.390 NVMe Specification Version (Identify): 1.3 00:22:53.390 Maximum Queue Entries: 128 00:22:53.390 Contiguous Queues Required: Yes 00:22:53.390 Arbitration Mechanisms Supported 00:22:53.390 Weighted Round Robin: Not Supported 00:22:53.390 Vendor Specific: Not Supported 00:22:53.390 Reset Timeout: 15000 ms 00:22:53.390 
Doorbell Stride: 4 bytes 00:22:53.390 NVM Subsystem Reset: Not Supported 00:22:53.390 Command Sets Supported 00:22:53.390 NVM Command Set: Supported 00:22:53.390 Boot Partition: Not Supported 00:22:53.390 Memory Page Size Minimum: 4096 bytes 00:22:53.390 Memory Page Size Maximum: 4096 bytes 00:22:53.390 Persistent Memory Region: Not Supported 00:22:53.390 Optional Asynchronous Events Supported 00:22:53.390 Namespace Attribute Notices: Supported 00:22:53.390 Firmware Activation Notices: Not Supported 00:22:53.390 ANA Change Notices: Not Supported 00:22:53.390 PLE Aggregate Log Change Notices: Not Supported 00:22:53.390 LBA Status Info Alert Notices: Not Supported 00:22:53.390 EGE Aggregate Log Change Notices: Not Supported 00:22:53.390 Normal NVM Subsystem Shutdown event: Not Supported 00:22:53.390 Zone Descriptor Change Notices: Not Supported 00:22:53.390 Discovery Log Change Notices: Not Supported 00:22:53.390 Controller Attributes 00:22:53.390 128-bit Host Identifier: Supported 00:22:53.390 Non-Operational Permissive Mode: Not Supported 00:22:53.390 NVM Sets: Not Supported 00:22:53.390 Read Recovery Levels: Not Supported 00:22:53.390 Endurance Groups: Not Supported 00:22:53.390 Predictable Latency Mode: Not Supported 00:22:53.390 Traffic Based Keep ALive: Not Supported 00:22:53.390 Namespace Granularity: Not Supported 00:22:53.390 SQ Associations: Not Supported 00:22:53.390 UUID List: Not Supported 00:22:53.390 Multi-Domain Subsystem: Not Supported 00:22:53.390 Fixed Capacity Management: Not Supported 00:22:53.390 Variable Capacity Management: Not Supported 00:22:53.390 Delete Endurance Group: Not Supported 00:22:53.390 Delete NVM Set: Not Supported 00:22:53.390 Extended LBA Formats Supported: Not Supported 00:22:53.390 Flexible Data Placement Supported: Not Supported 00:22:53.390 00:22:53.390 Controller Memory Buffer Support 00:22:53.390 ================================ 00:22:53.390 Supported: No 00:22:53.390 00:22:53.390 Persistent Memory Region Support 00:22:53.390 ================================ 00:22:53.390 Supported: No 00:22:53.390 00:22:53.390 Admin Command Set Attributes 00:22:53.390 ============================ 00:22:53.390 Security Send/Receive: Not Supported 00:22:53.390 Format NVM: Not Supported 00:22:53.390 Firmware Activate/Download: Not Supported 00:22:53.390 Namespace Management: Not Supported 00:22:53.390 Device Self-Test: Not Supported 00:22:53.390 Directives: Not Supported 00:22:53.390 NVMe-MI: Not Supported 00:22:53.390 Virtualization Management: Not Supported 00:22:53.390 Doorbell Buffer Config: Not Supported 00:22:53.390 Get LBA Status Capability: Not Supported 00:22:53.390 Command & Feature Lockdown Capability: Not Supported 00:22:53.390 Abort Command Limit: 4 00:22:53.390 Async Event Request Limit: 4 00:22:53.390 Number of Firmware Slots: N/A 00:22:53.390 Firmware Slot 1 Read-Only: N/A 00:22:53.390 Firmware Activation Without Reset: N/A 00:22:53.390 Multiple Update Detection Support: N/A 00:22:53.390 Firmware Update Granularity: No Information Provided 00:22:53.390 Per-Namespace SMART Log: No 00:22:53.390 Asymmetric Namespace Access Log Page: Not Supported 00:22:53.390 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:53.390 Command Effects Log Page: Supported 00:22:53.390 Get Log Page Extended Data: Supported 00:22:53.390 Telemetry Log Pages: Not Supported 00:22:53.390 Persistent Event Log Pages: Not Supported 00:22:53.390 Supported Log Pages Log Page: May Support 00:22:53.390 Commands Supported & Effects Log Page: Not Supported 00:22:53.390 Feature Identifiers & 
Effects Log Page:May Support 00:22:53.390 NVMe-MI Commands & Effects Log Page: May Support 00:22:53.390 Data Area 4 for Telemetry Log: Not Supported 00:22:53.390 Error Log Page Entries Supported: 128 00:22:53.390 Keep Alive: Supported 00:22:53.390 Keep Alive Granularity: 10000 ms 00:22:53.390 00:22:53.390 NVM Command Set Attributes 00:22:53.390 ========================== 00:22:53.390 Submission Queue Entry Size 00:22:53.390 Max: 64 00:22:53.390 Min: 64 00:22:53.390 Completion Queue Entry Size 00:22:53.390 Max: 16 00:22:53.390 Min: 16 00:22:53.390 Number of Namespaces: 32 00:22:53.390 Compare Command: Supported 00:22:53.390 Write Uncorrectable Command: Not Supported 00:22:53.390 Dataset Management Command: Supported 00:22:53.390 Write Zeroes Command: Supported 00:22:53.390 Set Features Save Field: Not Supported 00:22:53.390 Reservations: Supported 00:22:53.390 Timestamp: Not Supported 00:22:53.390 Copy: Supported 00:22:53.390 Volatile Write Cache: Present 00:22:53.390 Atomic Write Unit (Normal): 1 00:22:53.390 Atomic Write Unit (PFail): 1 00:22:53.390 Atomic Compare & Write Unit: 1 00:22:53.390 Fused Compare & Write: Supported 00:22:53.390 Scatter-Gather List 00:22:53.390 SGL Command Set: Supported 00:22:53.390 SGL Keyed: Supported 00:22:53.390 SGL Bit Bucket Descriptor: Not Supported 00:22:53.390 SGL Metadata Pointer: Not Supported 00:22:53.390 Oversized SGL: Not Supported 00:22:53.390 SGL Metadata Address: Not Supported 00:22:53.390 SGL Offset: Supported 00:22:53.390 Transport SGL Data Block: Not Supported 00:22:53.390 Replay Protected Memory Block: Not Supported 00:22:53.390 00:22:53.390 Firmware Slot Information 00:22:53.390 ========================= 00:22:53.390 Active slot: 1 00:22:53.390 Slot 1 Firmware Revision: 24.09 00:22:53.390 00:22:53.390 00:22:53.390 Commands Supported and Effects 00:22:53.390 ============================== 00:22:53.390 Admin Commands 00:22:53.390 -------------- 00:22:53.390 Get Log Page (02h): Supported 00:22:53.390 Identify (06h): Supported 00:22:53.390 Abort (08h): Supported 00:22:53.390 Set Features (09h): Supported 00:22:53.390 Get Features (0Ah): Supported 00:22:53.390 Asynchronous Event Request (0Ch): Supported 00:22:53.390 Keep Alive (18h): Supported 00:22:53.390 I/O Commands 00:22:53.390 ------------ 00:22:53.390 Flush (00h): Supported LBA-Change 00:22:53.390 Write (01h): Supported LBA-Change 00:22:53.390 Read (02h): Supported 00:22:53.390 Compare (05h): Supported 00:22:53.390 Write Zeroes (08h): Supported LBA-Change 00:22:53.390 Dataset Management (09h): Supported LBA-Change 00:22:53.390 Copy (19h): Supported LBA-Change 00:22:53.390 00:22:53.390 Error Log 00:22:53.390 ========= 00:22:53.390 00:22:53.390 Arbitration 00:22:53.390 =========== 00:22:53.390 Arbitration Burst: 1 00:22:53.390 00:22:53.390 Power Management 00:22:53.390 ================ 00:22:53.390 Number of Power States: 1 00:22:53.390 Current Power State: Power State #0 00:22:53.390 Power State #0: 00:22:53.390 Max Power: 0.00 W 00:22:53.390 Non-Operational State: Operational 00:22:53.390 Entry Latency: Not Reported 00:22:53.390 Exit Latency: Not Reported 00:22:53.390 Relative Read Throughput: 0 00:22:53.390 Relative Read Latency: 0 00:22:53.390 Relative Write Throughput: 0 00:22:53.390 Relative Write Latency: 0 00:22:53.390 Idle Power: Not Reported 00:22:53.390 Active Power: Not Reported 00:22:53.390 Non-Operational Permissive Mode: Not Supported 00:22:53.390 00:22:53.390 Health Information 00:22:53.390 ================== 00:22:53.390 Critical Warnings: 00:22:53.390 Available Spare Space: 
OK 00:22:53.391 Temperature: OK 00:22:53.391 Device Reliability: OK 00:22:53.391 Read Only: No 00:22:53.391 Volatile Memory Backup: OK 00:22:53.391 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:53.391 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:53.391 Available Spare: 0% 00:22:53.391 Available Spare Threshold: 0% 00:22:53.391 Life Percentage Used:[2024-07-24 22:10:32.506220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.506233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.506247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9198c0, cid 7, qid 0 00:22:53.391 [2024-07-24 22:10:32.506390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.506397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.506401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9198c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.506437] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:53.391 [2024-07-24 22:10:32.506448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918e40) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.506454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.391 [2024-07-24 22:10:32.506461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x918fc0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.506467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.391 [2024-07-24 22:10:32.506473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x919140) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.506478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.391 [2024-07-24 22:10:32.506484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.506490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.391 [2024-07-24 22:10:32.506499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.506517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.506531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.391 [2024-07-24 22:10:32.506673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.506680] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.506685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.506697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.506713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.506735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.391 [2024-07-24 22:10:32.506874] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.506881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.506885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.506896] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:53.391 [2024-07-24 22:10:32.506902] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:53.391 [2024-07-24 22:10:32.506912] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.506922] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.506929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.506941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.391 [2024-07-24 22:10:32.507034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.507041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.507045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.507061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.507077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.507089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.391 [2024-07-24 22:10:32.507177] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.507184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.507188] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.507202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.507221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.507232] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.391 [2024-07-24 22:10:32.507327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.507334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.507338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.507353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.507370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.507381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.391 [2024-07-24 22:10:32.507478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.507485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.507490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.507504] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.507520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.507532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.391 [2024-07-24 22:10:32.507617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.507624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.507628] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507633] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.507643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.507659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.507670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.391 [2024-07-24 22:10:32.507783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.507790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.507794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.391 [2024-07-24 22:10:32.507810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.391 [2024-07-24 22:10:32.507828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.391 [2024-07-24 22:10:32.507840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.391 [2024-07-24 22:10:32.507931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.391 [2024-07-24 22:10:32.507938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.391 [2024-07-24 22:10:32.507943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.391 [2024-07-24 22:10:32.507948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.392 [2024-07-24 22:10:32.507957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.507962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.507967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.392 [2024-07-24 22:10:32.507974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.392 [2024-07-24 22:10:32.507985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.392 [2024-07-24 22:10:32.508083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.392 [2024-07-24 22:10:32.508090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.392 [2024-07-24 22:10:32.508094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.392 
[2024-07-24 22:10:32.508110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.392 [2024-07-24 22:10:32.508126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.392 [2024-07-24 22:10:32.508137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.392 [2024-07-24 22:10:32.508225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.392 [2024-07-24 22:10:32.508232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.392 [2024-07-24 22:10:32.508236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.392 [2024-07-24 22:10:32.508251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.392 [2024-07-24 22:10:32.508267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.392 [2024-07-24 22:10:32.508279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.392 [2024-07-24 22:10:32.508385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.392 [2024-07-24 22:10:32.508392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.392 [2024-07-24 22:10:32.508396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.392 [2024-07-24 22:10:32.508410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.392 [2024-07-24 22:10:32.508427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.392 [2024-07-24 22:10:32.508440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.392 [2024-07-24 22:10:32.508537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.392 [2024-07-24 22:10:32.508544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.392 [2024-07-24 22:10:32.508549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.392 [2024-07-24 22:10:32.508563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.392 [2024-07-24 
22:10:32.508572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.392 [2024-07-24 22:10:32.508579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.392 [2024-07-24 22:10:32.508591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.392 [2024-07-24 22:10:32.508688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.392 [2024-07-24 22:10:32.508694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.392 [2024-07-24 22:10:32.508699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.508704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.392 [2024-07-24 22:10:32.512719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.512727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.512732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8adf00) 00:22:53.392 [2024-07-24 22:10:32.512739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.392 [2024-07-24 22:10:32.512752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9192c0, cid 3, qid 0 00:22:53.392 [2024-07-24 22:10:32.512929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.392 [2024-07-24 22:10:32.512936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.392 [2024-07-24 22:10:32.512940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.392 [2024-07-24 22:10:32.512945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9192c0) on tqpair=0x8adf00 00:22:53.392 [2024-07-24 22:10:32.512953] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:53.392 0% 00:22:53.392 Data Units Read: 0 00:22:53.392 Data Units Written: 0 00:22:53.392 Host Read Commands: 0 00:22:53.392 Host Write Commands: 0 00:22:53.392 Controller Busy Time: 0 minutes 00:22:53.392 Power Cycles: 0 00:22:53.392 Power On Hours: 0 hours 00:22:53.392 Unsafe Shutdowns: 0 00:22:53.392 Unrecoverable Media Errors: 0 00:22:53.392 Lifetime Error Log Entries: 0 00:22:53.392 Warning Temperature Time: 0 minutes 00:22:53.392 Critical Temperature Time: 0 minutes 00:22:53.392 00:22:53.392 Number of Queues 00:22:53.392 ================ 00:22:53.392 Number of I/O Submission Queues: 127 00:22:53.392 Number of I/O Completion Queues: 127 00:22:53.392 00:22:53.392 Active Namespaces 00:22:53.392 ================= 00:22:53.392 Namespace ID:1 00:22:53.392 Error Recovery Timeout: Unlimited 00:22:53.392 Command Set Identifier: NVM (00h) 00:22:53.392 Deallocate: Supported 00:22:53.392 Deallocated/Unwritten Error: Not Supported 00:22:53.392 Deallocated Read Value: Unknown 00:22:53.392 Deallocate in Write Zeroes: Not Supported 00:22:53.392 Deallocated Guard Field: 0xFFFF 00:22:53.392 Flush: Supported 00:22:53.392 Reservation: Supported 00:22:53.392 Namespace Sharing Capabilities: Multiple Controllers 00:22:53.392 Size (in LBAs): 131072 (0GiB) 00:22:53.392 Capacity (in LBAs): 131072 (0GiB) 00:22:53.392 Utilization (in LBAs): 131072 (0GiB) 00:22:53.392 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:22:53.392 EUI64: ABCDEF0123456789 00:22:53.392 UUID: acda57b8-a249-42b8-b840-dc16272b86ae 00:22:53.392 Thin Provisioning: Not Supported 00:22:53.392 Per-NS Atomic Units: Yes 00:22:53.392 Atomic Boundary Size (Normal): 0 00:22:53.392 Atomic Boundary Size (PFail): 0 00:22:53.392 Atomic Boundary Offset: 0 00:22:53.392 Maximum Single Source Range Length: 65535 00:22:53.392 Maximum Copy Length: 65535 00:22:53.392 Maximum Source Range Count: 1 00:22:53.392 NGUID/EUI64 Never Reused: No 00:22:53.392 Namespace Write Protected: No 00:22:53.392 Number of LBA Formats: 1 00:22:53.392 Current LBA Format: LBA Format #00 00:22:53.392 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:53.392 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.392 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.392 rmmod nvme_tcp 00:22:53.392 rmmod nvme_fabrics 00:22:53.392 rmmod nvme_keyring 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2770133 ']' 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2770133 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2770133 ']' 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2770133 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2770133 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2770133' 00:22:53.652 killing process with pid 2770133 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2770133 00:22:53.652 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2770133 00:22:53.911 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.911 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.911 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.911 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.912 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.912 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.912 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.912 22:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.822 22:10:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.822 00:22:55.822 real 0m10.624s 00:22:55.822 user 0m7.959s 00:22:55.822 sys 0m5.547s 00:22:55.822 22:10:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:55.822 22:10:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.822 ************************************ 00:22:55.822 END TEST nvmf_identify 00:22:55.822 ************************************ 00:22:55.822 22:10:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:55.822 22:10:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:55.822 22:10:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:55.822 22:10:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.822 ************************************ 00:22:55.822 START TEST nvmf_perf 00:22:55.822 ************************************ 00:22:55.822 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:56.081 * Looking for test storage... 
00:22:56.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
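Note: the nvmftestinit trace that follows detects the two E810 ports (cvl_0_0, cvl_0_1) and wires one of them into a private network namespace so the target and the initiator can exchange real NVMe/TCP traffic on a single host. Condensed, and using the device and namespace names from this run, the plumbing amounts to roughly:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two ping checks at the end confirm that 10.0.0.1 and 10.0.0.2 can reach each other before any NVMe/TCP connection is attempted.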
00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.082 22:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.663 
22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.663 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:02.664 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:02.664 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:23:02.664 Found net devices under 0000:af:00.0: cvl_0_0 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.664 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:02.665 Found net devices under 0000:af:00.1: cvl_0_1 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:02.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:23:02.665 00:23:02.665 --- 10.0.0.2 ping statistics --- 00:23:02.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.665 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:23:02.665 00:23:02.665 --- 10.0.0.1 ping statistics --- 00:23:02.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.665 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2774079 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2774079 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2774079 ']' 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
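Note: with nvmf_tgt started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), perf.sh configures the target over JSON-RPC and then drives it with spdk_nvme_perf. A condensed sketch of the configuration calls visible in the trace below, with the rpc.py path shortened to $rpc purely for readability:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Nvme0n1 is attached from the local controller at 0000:d8:00.0 via gen_nvme.sh / load_subsystem_config
  $rpc bdev_malloc_create 64 512                                        # creates Malloc0
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Each subsequent spdk_nvme_perf run then reuses the same connection string (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420) while varying queue depth (-q), I/O size (-o), and runtime (-t); the first run targets the local PCIe controller (trtype:PCIe traddr:0000:d8:00.0) as a baseline.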
00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.665 22:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:02.665 [2024-07-24 22:10:41.643947] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:23:02.665 [2024-07-24 22:10:41.643997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.665 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.665 [2024-07-24 22:10:41.718212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.666 [2024-07-24 22:10:41.792011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.666 [2024-07-24 22:10:41.792049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.666 [2024-07-24 22:10:41.792058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.666 [2024-07-24 22:10:41.792066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.666 [2024-07-24 22:10:41.792089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.666 [2024-07-24 22:10:41.792136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.666 [2024-07-24 22:10:41.792229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.666 [2024-07-24 22:10:41.792316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.666 [2024-07-24 22:10:41.792318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.293 22:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.293 22:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:03.293 22:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.293 22:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:03.293 22:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:03.293 22:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.552 22:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:03.552 22:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:06.838 22:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:06.838 22:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:06.838 22:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:23:06.838 22:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:06.838 22:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:06.838 22:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:d8:00.0 ']' 00:23:06.838 22:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:06.838 22:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:06.838 22:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.097 [2024-07-24 22:10:46.089219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.097 22:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.097 22:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:07.097 22:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.356 22:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:07.356 22:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:07.614 22:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.614 [2024-07-24 22:10:46.827931] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.873 22:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:07.873 22:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:23:07.873 22:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:07.873 22:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:07.873 22:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:09.250 Initializing NVMe Controllers 00:23:09.250 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:23:09.250 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:23:09.250 Initialization complete. Launching workers. 
00:23:09.250 ======================================================== 00:23:09.250 Latency(us) 00:23:09.250 Device Information : IOPS MiB/s Average min max 00:23:09.250 PCIE (0000:d8:00.0) NSID 1 from core 0: 102238.08 399.37 312.64 34.30 7189.63 00:23:09.250 ======================================================== 00:23:09.250 Total : 102238.08 399.37 312.64 34.30 7189.63 00:23:09.250 00:23:09.250 22:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:09.250 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.628 Initializing NVMe Controllers 00:23:10.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:10.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:10.628 Initialization complete. Launching workers. 00:23:10.628 ======================================================== 00:23:10.628 Latency(us) 00:23:10.628 Device Information : IOPS MiB/s Average min max 00:23:10.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.00 0.34 12089.82 227.52 45201.17 00:23:10.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21837.50 7960.65 47887.26 00:23:10.628 ======================================================== 00:23:10.628 Total : 132.00 0.52 15486.74 227.52 47887.26 00:23:10.628 00:23:10.628 22:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:10.628 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.008 Initializing NVMe Controllers 00:23:12.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:12.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:12.008 Initialization complete. Launching workers. 
00:23:12.008 ======================================================== 00:23:12.008 Latency(us) 00:23:12.008 Device Information : IOPS MiB/s Average min max 00:23:12.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10360.61 40.47 3103.09 486.04 7756.61 00:23:12.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3853.85 15.05 8340.61 6712.44 22367.67 00:23:12.008 ======================================================== 00:23:12.008 Total : 14214.46 55.53 4523.10 486.04 22367.67 00:23:12.008 00:23:12.008 22:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:12.008 22:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:12.008 22:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:12.008 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.543 Initializing NVMe Controllers 00:23:14.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.543 Controller IO queue size 128, less than required. 00:23:14.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.543 Controller IO queue size 128, less than required. 00:23:14.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:14.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:14.543 Initialization complete. Launching workers. 00:23:14.543 ======================================================== 00:23:14.543 Latency(us) 00:23:14.543 Device Information : IOPS MiB/s Average min max 00:23:14.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1008.00 252.00 131292.39 76448.48 215569.02 00:23:14.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.00 152.25 218045.09 79058.12 341317.64 00:23:14.543 ======================================================== 00:23:14.543 Total : 1617.00 404.25 163965.48 76448.48 341317.64 00:23:14.543 00:23:14.543 22:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:14.543 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.802 No valid NVMe controllers or AIO or URING devices found 00:23:14.802 Initializing NVMe Controllers 00:23:14.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.802 Controller IO queue size 128, less than required. 00:23:14.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.802 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:14.802 Controller IO queue size 128, less than required. 00:23:14.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.802 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:14.802 WARNING: Some requested NVMe devices were skipped 00:23:14.802 22:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:14.802 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.333 Initializing NVMe Controllers 00:23:17.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:17.333 Controller IO queue size 128, less than required. 00:23:17.333 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:17.333 Controller IO queue size 128, less than required. 00:23:17.333 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:17.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:17.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:17.333 Initialization complete. Launching workers. 00:23:17.333 00:23:17.333 ==================== 00:23:17.333 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:17.333 TCP transport: 00:23:17.333 polls: 40457 00:23:17.333 idle_polls: 14272 00:23:17.333 sock_completions: 26185 00:23:17.333 nvme_completions: 4023 00:23:17.333 submitted_requests: 6032 00:23:17.333 queued_requests: 1 00:23:17.333 00:23:17.333 ==================== 00:23:17.333 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:17.333 TCP transport: 00:23:17.333 polls: 44081 00:23:17.333 idle_polls: 16684 00:23:17.333 sock_completions: 27397 00:23:17.333 nvme_completions: 3869 00:23:17.333 submitted_requests: 5800 00:23:17.333 queued_requests: 1 00:23:17.333 ======================================================== 00:23:17.333 Latency(us) 00:23:17.333 Device Information : IOPS MiB/s Average min max 00:23:17.333 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1005.49 251.37 129505.22 59613.83 198762.09 00:23:17.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 966.99 241.75 136112.15 60730.79 194828.40 00:23:17.334 ======================================================== 00:23:17.334 Total : 1972.48 493.12 132744.21 59613.83 198762.09 00:23:17.334 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.592 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.592 rmmod nvme_tcp 00:23:17.851 rmmod nvme_fabrics 00:23:17.851 rmmod nvme_keyring 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2774079 ']' 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2774079 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2774079 ']' 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2774079 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2774079 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2774079' 00:23:17.851 killing process with pid 2774079 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2774079 00:23:17.851 22:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2774079 00:23:20.387 22:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:20.387 22:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:20.387 22:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:20.387 22:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.387 22:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.387 22:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.387 22:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.387 22:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.294 22:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.294 00:23:22.294 real 0m26.066s 00:23:22.294 user 1m8.659s 00:23:22.294 sys 0m8.419s 00:23:22.294 22:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.294 22:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:22.294 ************************************ 00:23:22.294 END TEST nvmf_perf 00:23:22.294 ************************************ 00:23:22.294 22:11:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:22.294 22:11:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:22.294 22:11:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.294 22:11:01 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.295 ************************************ 00:23:22.295 START TEST nvmf_fio_host 00:23:22.295 ************************************ 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:22.295 * Looking for test storage... 00:23:22.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.295 22:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:28.887 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:28.887 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:28.887 22:11:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:28.887 Found net devices under 0000:af:00.0: cvl_0_0 00:23:28.887 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:28.888 Found net devices under 0000:af:00.1: cvl_0_1 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:28.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:23:28.888 00:23:28.888 --- 10.0.0.2 ping statistics --- 00:23:28.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.888 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:23:28.888 00:23:28.888 --- 10.0.0.1 ping statistics --- 00:23:28.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.888 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2780722 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2780722 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2780722 ']' 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.888 22:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.888 [2024-07-24 22:11:08.029041] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:23:28.888 [2024-07-24 22:11:08.029090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.888 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.148 [2024-07-24 22:11:08.101174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.148 [2024-07-24 22:11:08.176365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.148 [2024-07-24 22:11:08.176402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.148 [2024-07-24 22:11:08.176411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.148 [2024-07-24 22:11:08.176420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.148 [2024-07-24 22:11:08.176428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.148 [2024-07-24 22:11:08.176473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.148 [2024-07-24 22:11:08.176566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.148 [2024-07-24 22:11:08.176649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.148 [2024-07-24 22:11:08.176651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.715 22:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.716 22:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:29.716 22:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:29.975 [2024-07-24 22:11:08.996292] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.975 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:29.975 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:29.975 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.975 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:30.234 Malloc1 00:23:30.234 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.494 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:30.494 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.753 [2024-07-24 22:11:09.789520] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.753 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:31.013 
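The rpc.py calls traced just above prepare the target side before the fio plugin run: create the TCP transport, back it with a 64 MiB / 512-byte-block malloc bdev, expose that bdev as a namespace of nqn.2016-06.io.spdk:cnode1, and add data and discovery listeners on 10.0.0.2:4420. A minimal sketch of that sequence, assuming an nvmf_tgt is already running and that rpc.py is invoked from the SPDK source tree against the default /var/tmp/spdk.sock (the run above uses the full Jenkins workspace path and a target inside the cvl_0_0_ns_spdk namespace):

    # TCP transport with the same options the test passes (-o, -u 8192)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to serve as the backing namespace
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    # subsystem that allows any host (-a) with a fixed serial number
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    # data listener plus a discovery listener on the target address
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With that in place, the fio runs that follow reach the namespace purely over NVMe/TCP by passing the transport triple as the fio filename, as the traced fio_nvme invocations show.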
22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:31.013 22:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:31.013 22:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:31.272 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:31.272 fio-3.35 00:23:31.272 Starting 
1 thread 00:23:31.272 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.806 00:23:33.806 test: (groupid=0, jobs=1): err= 0: pid=2781153: Wed Jul 24 22:11:12 2024 00:23:33.806 read: IOPS=12.4k, BW=48.5MiB/s (50.9MB/s)(97.3MiB/2005msec) 00:23:33.806 slat (nsec): min=1540, max=253511, avg=1672.27, stdev=2204.51 00:23:33.806 clat (usec): min=3244, max=10292, avg=5708.30, stdev=420.48 00:23:33.806 lat (usec): min=3280, max=10294, avg=5709.97, stdev=420.51 00:23:33.806 clat percentiles (usec): 00:23:33.806 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:23:33.806 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5800], 00:23:33.806 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6194], 95.00th=[ 6325], 00:23:33.806 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7635], 99.95th=[ 9503], 00:23:33.806 | 99.99th=[10028] 00:23:33.806 bw ( KiB/s): min=48552, max=50376, per=99.95%, avg=49668.00, stdev=784.94, samples=4 00:23:33.806 iops : min=12138, max=12594, avg=12417.00, stdev=196.23, samples=4 00:23:33.806 write: IOPS=12.4k, BW=48.5MiB/s (50.8MB/s)(97.2MiB/2005msec); 0 zone resets 00:23:33.806 slat (nsec): min=1589, max=230259, avg=1752.62, stdev=1624.06 00:23:33.806 clat (usec): min=2478, max=8809, avg=4562.41, stdev=349.55 00:23:33.806 lat (usec): min=2493, max=8811, avg=4564.17, stdev=349.54 00:23:33.806 clat percentiles (usec): 00:23:33.806 | 1.00th=[ 3720], 5.00th=[ 4015], 10.00th=[ 4146], 20.00th=[ 4293], 00:23:33.806 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:23:33.806 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5080], 00:23:33.806 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 6718], 99.95th=[ 8160], 00:23:33.806 | 99.99th=[ 8848] 00:23:33.806 bw ( KiB/s): min=49240, max=50120, per=100.00%, avg=49638.00, stdev=362.53, samples=4 00:23:33.806 iops : min=12310, max=12530, avg=12409.50, stdev=90.63, samples=4 00:23:33.806 lat (msec) : 4=2.35%, 10=97.64%, 20=0.01% 00:23:33.806 cpu : usr=62.18%, sys=32.24%, ctx=77, majf=0, minf=5 00:23:33.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:33.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:33.806 issued rwts: total=24909,24871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:33.806 00:23:33.806 Run status group 0 (all jobs): 00:23:33.806 READ: bw=48.5MiB/s (50.9MB/s), 48.5MiB/s-48.5MiB/s (50.9MB/s-50.9MB/s), io=97.3MiB (102MB), run=2005-2005msec 00:23:33.806 WRITE: bw=48.5MiB/s (50.8MB/s), 48.5MiB/s-48.5MiB/s (50.8MB/s-50.8MB/s), io=97.2MiB (102MB), run=2005-2005msec 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:33.806 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:33.807 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:33.807 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:33.807 22:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:33.807 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:33.807 fio-3.35 00:23:33.807 Starting 1 thread 00:23:34.065 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.601 00:23:36.601 test: (groupid=0, jobs=1): err= 0: pid=2781808: Wed Jul 24 22:11:15 2024 00:23:36.601 read: IOPS=10.7k, BW=168MiB/s (176MB/s)(336MiB/2006msec) 00:23:36.601 slat (nsec): min=2441, max=82004, avg=2693.53, stdev=1152.70 00:23:36.601 clat (usec): min=1277, max=14776, avg=7101.42, stdev=1949.32 00:23:36.601 lat (usec): min=1280, max=14779, avg=7104.11, stdev=1949.47 00:23:36.601 clat percentiles (usec): 00:23:36.601 | 1.00th=[ 3523], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5407], 00:23:36.601 | 30.00th=[ 5932], 40.00th=[ 6456], 50.00th=[ 6980], 60.00th=[ 7504], 00:23:36.601 | 70.00th=[ 7963], 80.00th=[ 8455], 90.00th=[ 9503], 95.00th=[10814], 00:23:36.601 | 99.00th=[12780], 99.50th=[13173], 99.90th=[13698], 99.95th=[13960], 00:23:36.601 | 99.99th=[14222] 00:23:36.601 bw ( KiB/s): min=82304, max=95360, per=50.46%, avg=86640.00, 
stdev=5917.23, samples=4 00:23:36.601 iops : min= 5144, max= 5960, avg=5415.00, stdev=369.83, samples=4 00:23:36.601 write: IOPS=6275, BW=98.1MiB/s (103MB/s)(177MiB/1808msec); 0 zone resets 00:23:36.601 slat (usec): min=28, max=243, avg=30.02, stdev= 5.82 00:23:36.601 clat (usec): min=3143, max=13615, avg=8277.62, stdev=1466.39 00:23:36.601 lat (usec): min=3172, max=13644, avg=8307.64, stdev=1467.63 00:23:36.601 clat percentiles (usec): 00:23:36.601 | 1.00th=[ 5669], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7046], 00:23:36.601 | 30.00th=[ 7373], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8455], 00:23:36.601 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10290], 95.00th=[11207], 00:23:36.601 | 99.00th=[12387], 99.50th=[12649], 99.90th=[12911], 99.95th=[13042], 00:23:36.601 | 99.99th=[13173] 00:23:36.601 bw ( KiB/s): min=85920, max=99200, per=89.83%, avg=90200.00, stdev=6111.83, samples=4 00:23:36.601 iops : min= 5370, max= 6200, avg=5637.50, stdev=381.99, samples=4 00:23:36.601 lat (msec) : 2=0.04%, 4=1.74%, 10=88.86%, 20=9.36% 00:23:36.601 cpu : usr=81.90%, sys=16.11%, ctx=28, majf=0, minf=2 00:23:36.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:36.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:36.601 issued rwts: total=21528,11346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:36.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:36.601 00:23:36.601 Run status group 0 (all jobs): 00:23:36.601 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=336MiB (353MB), run=2006-2006msec 00:23:36.601 WRITE: bw=98.1MiB/s (103MB/s), 98.1MiB/s-98.1MiB/s (103MB/s-103MB/s), io=177MiB (186MB), run=1808-1808msec 00:23:36.601 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.601 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:36.601 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:36.601 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:36.601 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:36.601 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.601 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.602 rmmod nvme_tcp 00:23:36.602 rmmod nvme_fabrics 00:23:36.602 rmmod nvme_keyring 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2780722 ']' 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # 
killprocess 2780722 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2780722 ']' 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2780722 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2780722 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2780722' 00:23:36.602 killing process with pid 2780722 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2780722 00:23:36.602 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2780722 00:23:36.862 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.862 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.862 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.862 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.862 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.862 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.862 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.862 22:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.398 22:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.398 00:23:39.399 real 0m16.899s 00:23:39.399 user 0m54.152s 00:23:39.399 sys 0m7.660s 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.399 ************************************ 00:23:39.399 END TEST nvmf_fio_host 00:23:39.399 ************************************ 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.399 ************************************ 00:23:39.399 START TEST nvmf_failover 00:23:39.399 ************************************ 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:39.399 * Looking for test storage... 
00:23:39.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
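nvmftestinit below goes through the same interface preparation already traced for the fio host test above: one port of the e810 pair is moved into a private network namespace and addressed as the target (10.0.0.2) while the other stays in the root namespace as the initiator (10.0.0.1), followed by an iptables rule for the NVMe/TCP port and a ping in each direction as a sanity check. A rough sketch of those steps, assuming the two ports come up as cvl_0_0 and cvl_0_1 exactly as in the run above:

    # target-side namespace; cvl_0_0 becomes the listener interface inside it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps cvl_0_1 in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP traffic arriving on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # reachability checks in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1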
00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.399 22:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.972 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.972 22:11:24 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:45.973 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:45.973 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:45.973 Found net devices under 0000:af:00.0: cvl_0_0 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:45.973 Found net devices under 0000:af:00.1: cvl_0_1 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:45.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:23:45.973 00:23:45.973 --- 10.0.0.2 ping statistics --- 00:23:45.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.973 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:45.973 22:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:23:45.973 00:23:45.973 --- 10.0.0.1 ping statistics --- 00:23:45.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.973 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2785869 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2785869 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2785869 ']' 00:23:45.973 22:11:25 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:45.973 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:45.973 [2024-07-24 22:11:25.087415] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:23:45.973 [2024-07-24 22:11:25.087468] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.973 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.973 [2024-07-24 22:11:25.161244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:46.233 [2024-07-24 22:11:25.233619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.233 [2024-07-24 22:11:25.233658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.233 [2024-07-24 22:11:25.233667] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.233 [2024-07-24 22:11:25.233676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.233 [2024-07-24 22:11:25.233683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
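The span above is nvmf_tcp_init and nvmfappstart: the two detected e810 ports (cvl_0_0 and cvl_0_1) are split across a network namespace so the target and the initiator can exchange real TCP traffic on one host, and nvmf_tgt is then launched inside that namespace with core mask 0xE. A condensed sketch of the commands the trace shows, assuming this machine's cvl_0_* interface names (nvmf_tgt path abbreviated), is:

  ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first e810 port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # both directions verified above
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE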
00:23:46.233 [2024-07-24 22:11:25.233733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.233 [2024-07-24 22:11:25.233819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.233 [2024-07-24 22:11:25.233821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.801 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.801 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:46.801 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.801 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.801 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:46.801 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.801 22:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:47.060 [2024-07-24 22:11:26.093058] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.060 22:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:47.320 Malloc0 00:23:47.320 22:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.320 22:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.579 22:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.838 [2024-07-24 22:11:26.857432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.838 22:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:47.838 [2024-07-24 22:11:27.033937] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:48.137 [2024-07-24 22:11:27.218533] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2786293 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2786293 /var/tmp/bdevperf.sock 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2786293 ']' 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.137 22:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:49.075 22:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.075 22:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:49.075 22:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:49.334 NVMe0n1 00:23:49.334 22:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:49.903 00:23:49.903 22:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:49.903 22:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2786555 00:23:49.903 22:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:50.841 22:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.841 [2024-07-24 22:11:30.021753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.021998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.022007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.022017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.022027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 [2024-07-24 22:11:30.022036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d861e0 is same with the state(5) to be set 00:23:50.841 22:11:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:54.132 22:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:54.132 00:23:54.392 22:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:54.392 [2024-07-24 22:11:33.531487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 [2024-07-24 22:11:33.531660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f60 is same with the state(5) to be set 00:23:54.392 22:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:57.685 22:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.685 [2024-07-24 22:11:36.729567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.685 22:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:58.622 22:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:58.882 [2024-07-24 22:11:37.925993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87d50 is same with the state(5) to be set 00:23:58.882 [2024-07-24 22:11:37.926046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87d50 is same with the state(5) to be set 00:23:58.882 [2024-07-24 22:11:37.926057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87d50 is same with the state(5) to be set 00:23:58.882 [2024-07-24 22:11:37.926067] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87d50 is same with the state(5) to be set 00:23:58.882 [2024-07-24 22:11:37.926085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87d50 is same with the state(5) to be set 00:23:58.882 [2024-07-24 22:11:37.926094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87d50 is same with the state(5) to be set 00:23:58.882 22:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2786555 00:24:05.465 0 00:24:05.465 22:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2786293 00:24:05.465 22:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2786293 ']' 00:24:05.465 22:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2786293 00:24:05.465 22:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:05.465 22:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.465 22:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786293 00:24:05.465 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:05.465 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:05.465 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2786293' 00:24:05.465 killing process with pid 2786293 00:24:05.465 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2786293 00:24:05.465 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2786293 00:24:05.465 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.465 [2024-07-24 22:11:27.283275] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:24:05.465 [2024-07-24 22:11:27.283333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786293 ] 00:24:05.465 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.465 [2024-07-24 22:11:27.352878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.465 [2024-07-24 22:11:27.423314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.465 Running I/O for 15 seconds... 
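Everything from here down is the try.txt dump from bdevperf. The failover exercise that produced it is visible in the rpc.py calls traced above: build the target, give bdevperf a primary and a standby path, then repeatedly remove the listener under the active path while the verify workload runs. A condensed sketch of that sequence, with the rpc.py path shortened, is:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # and again for 4421, 4422
  # bdevperf side: one bdev, two paths
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # force failovers while the 15-second verify workload runs
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ...then attach a path on 4422, remove 4421, re-add 4420, remove 4422, as traced above

The "ABORTED - SQ DELETION" completions that fill the rest of the dump appear to be the queued verify I/O being failed back to bdevperf each time the active queue pair is torn down; the run then proceeds to kill bdevperf cleanly, which suggests the workload survived the path flips on the remaining listener.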
00:24:05.465 [2024-07-24 22:11:30.022899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.022942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.022960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.022971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.022984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.022994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 
22:11:30.023156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.466 [2024-07-24 22:11:30.023780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.466 [2024-07-24 22:11:30.023792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.466 [2024-07-24 22:11:30.023801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.023813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107176 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.467 [2024-07-24 22:11:30.023823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.023834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.467 [2024-07-24 22:11:30.023844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.023855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.467 [2024-07-24 22:11:30.023864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.023876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.467 [2024-07-24 22:11:30.023885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.023897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.467 [2024-07-24 22:11:30.023906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.023917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.467 [2024-07-24 22:11:30.023927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.023938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.467 [2024-07-24 22:11:30.023948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.023959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.467 [2024-07-24 22:11:30.023969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.023981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.467 [2024-07-24 22:11:30.023993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.024015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.467 [2024-07-24 22:11:30.024025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.467 [2024-07-24 22:11:30.024036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:05.467 [2024-07-24 22:11:30.024046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (2024-07-24 22:11:30.024-.025): queued READ (lba 106752-106968) and WRITE (lba 107232-107560) commands on sqid:1, each completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:24:05.469 [2024-07-24 22:11:30.025462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.469 [2024-07-24 22:11:30.025471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.469 [2024-07-24 22:11:30.025490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.469 [2024-07-24 22:11:30.025509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.469 [2024-07-24 22:11:30.025531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.469 [2024-07-24 22:11:30.025550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.469 [2024-07-24 22:11:30.025570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.469 [2024-07-24 22:11:30.025601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.469 [2024-07-24 22:11:30.025609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107616 len:8 PRP1 0x0 PRP2 0x0 00:24:05.469 [2024-07-24 22:11:30.025619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025663] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2502a20 was disconnected and freed. reset controller. 
00:24:05.469 [2024-07-24 22:11:30.025675] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:05.469 [2024-07-24 22:11:30.025697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.469 [2024-07-24 22:11:30.025707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.469 [2024-07-24 22:11:30.025729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.469 [2024-07-24 22:11:30.025748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.469 [2024-07-24 22:11:30.025767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.469 [2024-07-24 22:11:30.025776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:05.469 [2024-07-24 22:11:30.028469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:05.469 [2024-07-24 22:11:30.028500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250f590 (9): Bad file descriptor 00:24:05.469 [2024-07-24 22:11:30.095638] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:05.469 [2024-07-24 22:11:33.532046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.469 [2024-07-24 22:11:33.532083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (2024-07-24 22:11:33.532-.534): queued WRITE (lba 77760-78232) and READ (lba 77368-77680) commands on sqid:1, each completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:24:05.472 [2024-07-24 22:11:33.534081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.472 [2024-07-24 22:11:33.534090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.472 [2024-07-24 22:11:33.534110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.472 [2024-07-24 22:11:33.534129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.472 [2024-07-24 22:11:33.534154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.472 [2024-07-24 22:11:33.534173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.472 [2024-07-24 22:11:33.534192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.472 [2024-07-24 22:11:33.534212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.472 [2024-07-24 22:11:33.534231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.472 [2024-07-24 22:11:33.534251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.472 [2024-07-24 22:11:33.534270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:79 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.472 [2024-07-24 22:11:33.534289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.472 [2024-07-24 22:11:33.534309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.472 [2024-07-24 22:11:33.534328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.472 [2024-07-24 22:11:33.534348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.472 [2024-07-24 22:11:33.534368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.472 [2024-07-24 22:11:33.534387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.472 [2024-07-24 22:11:33.534398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.472 [2024-07-24 22:11:33.534407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:33.534425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:33.534445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:33.534465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78336 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:33.534485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:33.534504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:33.534523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:33.534542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:33.534561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:33.534580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.473 [2024-07-24 22:11:33.534609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.473 [2024-07-24 22:11:33.534618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:24:05.473 [2024-07-24 22:11:33.534627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:33.534671] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2533290 was disconnected and freed. reset controller. 
00:24:05.473 [2024-07-24 22:11:33.534682] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:24:05.473 [2024-07-24 22:11:33.534703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:05.473 [2024-07-24 22:11:33.534713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.473 [2024-07-24 22:11:33.534726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:05.473 [2024-07-24 22:11:33.534735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.473 [2024-07-24 22:11:33.534745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:05.473 [2024-07-24 22:11:33.534754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.473 [2024-07-24 22:11:33.534764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:05.473 [2024-07-24 22:11:33.534773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.473 [2024-07-24 22:11:33.534782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:05.473 [2024-07-24 22:11:33.537447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:05.473 [2024-07-24 22:11:33.537476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250f590 (9): Bad file descriptor 
00:24:05.473 [2024-07-24 22:11:33.563778] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
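The bdev_nvme_failover_trid notice above shows the initiator failing over between TCP listeners (10.0.0.2:4421 to 10.0.0.2:4422) of the same subsystem, nqn.2016-06.io.spdk:cnode1, and then recovering with a controller reset. A multi-listener subsystem of that kind is normally prepared with SPDK's rpc.py before bdevperf runs; the sketch below is illustrative only and is not taken from this job's scripts. The NQN and the 10.0.0.2:4420-4422 listeners come from the log, while the transport options, malloc bdev, and serial number are assumptions.

    # Illustrative sketch (assumed values noted); not the commands traced in this run.
    scripts/rpc.py nvmf_create_transport -t tcp                      # transport options assumed
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # backing bdev name/size assumed
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                                   # listener ports seen in the failover notices
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

With three listeners in place, the failover test can tear down one path at a time and expect the initiator to move to the next trid, which is the behavior logged here.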
00:24:05.473 [2024-07-24 22:11:37.926469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.473 [2024-07-24 22:11:37.926511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.473 [2024-07-24 22:11:37.926924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.473 [2024-07-24 22:11:37.926933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.926944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.926953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.926963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.926972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.926983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.926992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.927012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.927031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.927051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.927070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.927090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.927110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4504 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.927132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.927151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.474 [2024-07-24 22:11:37.927172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 
22:11:37.927331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927530] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.474 [2024-07-24 22:11:37.927699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.474 [2024-07-24 22:11:37.927708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 
22:11:37.927941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.927989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.927999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.475 [2024-07-24 22:11:37.928148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.475 [2024-07-24 22:11:37.928167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.475 [2024-07-24 22:11:37.928187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.475 [2024-07-24 22:11:37.928206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.475 [2024-07-24 22:11:37.928226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.475 [2024-07-24 22:11:37.928247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.475 [2024-07-24 22:11:37.928267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:82 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.475 [2024-07-24 22:11:37.928355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.475 [2024-07-24 22:11:37.928365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.476 [2024-07-24 22:11:37.928550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.476 [2024-07-24 22:11:37.928790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.476 [2024-07-24 22:11:37.928810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.476 [2024-07-24 22:11:37.928829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.476 [2024-07-24 22:11:37.928849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.476 [2024-07-24 22:11:37.928869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.476 [2024-07-24 22:11:37.928888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.476 [2024-07-24 22:11:37.928909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.476 [2024-07-24 22:11:37.928929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.928987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.928997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.929006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.929017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.929026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.929037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.476 [2024-07-24 22:11:37.929045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.929067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.476 [2024-07-24 22:11:37.929075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.476 [2024-07-24 22:11:37.929083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5288 len:8 PRP1 0x0 PRP2 0x0 00:24:05.476 [2024-07-24 22:11:37.929092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.929141] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2532f50 was disconnected and freed. reset controller. 
00:24:05.476 [2024-07-24 22:11:37.929152] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:05.476 [2024-07-24 22:11:37.929175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.476 [2024-07-24 22:11:37.929185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.929195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.476 [2024-07-24 22:11:37.929204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.476 [2024-07-24 22:11:37.929214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.476 [2024-07-24 22:11:37.929225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.477 [2024-07-24 22:11:37.929235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.477 [2024-07-24 22:11:37.929244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.477 [2024-07-24 22:11:37.929253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:05.477 [2024-07-24 22:11:37.931941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:05.477 [2024-07-24 22:11:37.931974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250f590 (9): Bad file descriptor 00:24:05.477 [2024-07-24 22:11:37.961601] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
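The "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" notice above is the bdev_nvme layer walking the alternate trids that were registered for nqn.2016-06.io.spdk:cnode1. A minimal bash sketch of how such a multipath controller is assembled and a failover forced, using only the rpc.py calls that appear later in this trace (socket path, ports and NQN copied from the trace; this is an illustration, not the test script itself):

# Sketch: register the same subsystem over three TCP portals so bdev_nvme
# has failover paths, then drop the active path to force a reset onto the
# next trid (socket/paths assumed from the trace above).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
for port in 4420 4421 4422; do
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# Detaching the path that currently carries I/O triggers the aborted-I/O dump
# and the "Resetting controller successful" sequence seen above.
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1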
00:24:05.477 00:24:05.477 Latency(us) 00:24:05.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.477 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:05.477 Verification LBA range: start 0x0 length 0x4000 00:24:05.477 NVMe0n1 : 15.00 12185.49 47.60 392.23 0.00 10155.73 619.32 11377.05 00:24:05.477 =================================================================================================================== 00:24:05.477 Total : 12185.49 47.60 392.23 0.00 10155.73 619.32 11377.05 00:24:05.477 Received shutdown signal, test time was about 15.000000 seconds 00:24:05.477 00:24:05.477 Latency(us) 00:24:05.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.477 =================================================================================================================== 00:24:05.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2789086 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2789086 /var/tmp/bdevperf.sock 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2789086 ']' 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
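The host/failover.sh@65-@75 trace above is the pass/fail check and the start of the second bdevperf instance: the first run's output is grepped for three successful resets (one per portal), then bdevperf is relaunched in RPC-driven mode (-z) so the script can reconfigure paths before kicking off I/O with bdevperf.py perform_tests, which shows up further down. A rough sketch of that sequence, assuming the run log lives in the try.txt file that the script cats and removes later:

# Sketch only: count failover-driven resets, then relaunch bdevperf idle.
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt   # assumed from the later cat / rm -f
count=$(grep -c 'Resetting controller successful' "$log")
(( count == 3 )) || exit 1    # one reset per 4420/4421/4422 portal
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
$bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &   # backgrounding assumed
bdevperf_pid=$!
# ... attach/detach paths over /var/tmp/bdevperf.sock, then drive the run:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests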
00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.477 22:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:06.046 22:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:06.046 22:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:06.046 22:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:06.046 [2024-07-24 22:11:45.238177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:06.305 22:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:06.305 [2024-07-24 22:11:45.406664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:06.305 22:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:06.564 NVMe0n1 00:24:06.564 22:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:06.824 00:24:07.083 22:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.342 00:24:07.342 22:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:07.342 22:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.602 22:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.602 22:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:10.946 22:11:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:10.946 22:11:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:10.946 22:11:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2789998 00:24:10.946 22:11:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:10.946 22:11:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2789998 00:24:11.884 0 00:24:12.144 22:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:12.144 [2024-07-24 22:11:44.288263] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:24:12.144 [2024-07-24 22:11:44.288319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2789086 ] 00:24:12.144 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.144 [2024-07-24 22:11:44.359558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.144 [2024-07-24 22:11:44.423372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.144 [2024-07-24 22:11:46.774076] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:12.144 [2024-07-24 22:11:46.774124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.144 [2024-07-24 22:11:46.774138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.144 [2024-07-24 22:11:46.774149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.144 [2024-07-24 22:11:46.774158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.144 [2024-07-24 22:11:46.774168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.144 [2024-07-24 22:11:46.774177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.144 [2024-07-24 22:11:46.774187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.144 [2024-07-24 22:11:46.774196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.144 [2024-07-24 22:11:46.774210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:12.144 [2024-07-24 22:11:46.774235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:12.144 [2024-07-24 22:11:46.774251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963590 (9): Bad file descriptor 00:24:12.144 [2024-07-24 22:11:46.783382] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:12.144 Running I/O for 1 seconds... 
00:24:12.144 00:24:12.144 Latency(us) 00:24:12.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.144 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:12.144 Verification LBA range: start 0x0 length 0x4000 00:24:12.144 NVMe0n1 : 1.01 11592.25 45.28 0.00 0.00 10983.44 2070.94 14260.63 00:24:12.144 =================================================================================================================== 00:24:12.144 Total : 11592.25 45.28 0.00 0.00 10983.44 2070.94 14260.63 00:24:12.144 22:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:12.144 22:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:12.144 22:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:12.404 22:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:12.404 22:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:12.662 22:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:12.662 22:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:15.953 22:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:15.953 22:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2789086 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2789086 ']' 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2789086 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2789086 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2789086' 00:24:15.953 killing process with pid 2789086 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2789086 00:24:15.953 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2789086 00:24:16.212 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:16.212 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.471 rmmod nvme_tcp 00:24:16.471 rmmod nvme_fabrics 00:24:16.471 rmmod nvme_keyring 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2785869 ']' 00:24:16.471 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2785869 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2785869 ']' 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2785869 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2785869 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2785869' 00:24:16.472 killing process with pid 2785869 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2785869 00:24:16.472 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2785869 00:24:16.731 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:16.731 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.731 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.731 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.731 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.731 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.731 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.731 22:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.636 22:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.896 00:24:18.896 real 0m39.706s 00:24:18.896 user 2m2.662s 00:24:18.896 sys 0m9.932s 00:24:18.896 22:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:18.896 22:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:18.896 ************************************ 00:24:18.896 END TEST nvmf_failover 00:24:18.896 ************************************ 00:24:18.896 22:11:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:18.896 22:11:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:18.896 22:11:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:18.896 22:11:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.896 ************************************ 00:24:18.896 START TEST nvmf_host_discovery 00:24:18.896 ************************************ 00:24:18.896 22:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:18.896 * Looking for test storage... 00:24:18.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.896 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.897 22:11:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.897 22:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.466 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:25.467 22:12:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:25.467 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:25.467 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:25.467 Found net devices under 0000:af:00.0: cvl_0_0 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:25.467 Found net devices under 0000:af:00.1: cvl_0_1 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:25.467 22:12:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.467 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:25.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:24:25.467 00:24:25.467 --- 10.0.0.2 ping statistics --- 00:24:25.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.467 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:25.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:24:25.726 00:24:25.726 --- 10.0.0.1 ping statistics --- 00:24:25.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.726 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2794982 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2794982 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2794982 ']' 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
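The nvmf/common.sh trace above (the host-discovery test's nvmftestinit) builds the usual two-port loopback bed: one e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings confirm both directions. Collected from the trace, the setup amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1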
00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.726 22:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.726 [2024-07-24 22:12:04.789893] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:24:25.726 [2024-07-24 22:12:04.789944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.726 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.726 [2024-07-24 22:12:04.866503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.985 [2024-07-24 22:12:04.940161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.985 [2024-07-24 22:12:04.940206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.985 [2024-07-24 22:12:04.940216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.985 [2024-07-24 22:12:04.940228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.985 [2024-07-24 22:12:04.940235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.985 [2024-07-24 22:12:04.940256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.553 [2024-07-24 22:12:05.635257] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:24:26.553 [2024-07-24 22:12:05.647435] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.553 null0 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.553 null1 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2795286 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2795286 /tmp/host.sock 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2795286 ']' 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:26.553 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.553 22:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.553 [2024-07-24 22:12:05.724942] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:24:26.553 [2024-07-24 22:12:05.724989] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795286 ] 00:24:26.553 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.812 [2024-07-24 22:12:05.795368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.812 [2024-07-24 22:12:05.870566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.381 22:12:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.381 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.641 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.641 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:27.641 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:27.641 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.641 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:27.642 
22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.642 [2024-07-24 22:12:06.830515] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:27.642 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:27.902 22:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.902 22:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:24:27.902 22:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:28.471 [2024-07-24 22:12:07.545032] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:28.471 [2024-07-24 22:12:07.545052] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:28.471 [2024-07-24 22:12:07.545065] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:28.471 
[2024-07-24 22:12:07.632331] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:28.730 [2024-07-24 22:12:07.737812] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:28.730 [2024-07-24 22:12:07.737832] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
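The xtrace above keeps expanding the same host/discovery.sh helpers: an RPC against the host application's /tmp/host.sock socket, filtered through jq, sort and xargs, wrapped in a bounded waitforcondition retry loop. A minimal sketch of that pattern, reconstructed from the expanded trace (the function bodies and the rpc_cmd wrapper are assumptions inferred from the trace, not copied from the scripts):

#!/usr/bin/env bash
# Hypothetical reconstruction of the helpers exercised in the trace above.
# The socket path, RPC method names and jq/sort/xargs filters are taken
# verbatim from the xtrace; everything else is inferred.
HOST_SOCK=/tmp/host.sock

rpc_cmd() {
    # Assumed thin wrapper around scripts/rpc.py; the real autotest helper
    # adds more plumbing (timeouts, plugin dirs, etc.).
    ./scripts/rpc.py "$@"
}

get_subsystem_names() {
    # Controller names seen by the host, e.g. "nvme0".
    rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2".
    rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

waitforcondition() {
    # Poll an arbitrary condition once per second, up to ~10 tries,
    # mirroring the "local max=10 / (( max-- )) / sleep 1" loop in the trace.
    local cond=$1 max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

# Usage, as in discovery.sh@105/@106:
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'

Each @105/@106-style check in the trace is one such waitforcondition call, polling until the discovery service has attached the controller and exposed its namespaces as bdevs.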
00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.990 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:29.250 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:29.251 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:29.251 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.251 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:29.251 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:29.251 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.251 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:29.563 22:12:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.563 [2024-07-24 22:12:08.555287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:29.563 [2024-07-24 22:12:08.555935] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:29.563 [2024-07-24 22:12:08.555958] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:29.563 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:29.564 [2024-07-24 22:12:08.642515] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:29.564 22:12:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:29.564 22:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:29.828 [2024-07-24 22:12:08.861617] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:29.828 [2024-07-24 22:12:08.861635] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:29.828 [2024-07-24 22:12:08.861648] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:30.767 22:12:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.767 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.767 [2024-07-24 22:12:09.827254] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:30.767 [2024-07-24 22:12:09.827274] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:30.767 [2024-07-24 22:12:09.828003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.767 [2024-07-24 22:12:09.828024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.767 [2024-07-24 22:12:09.828034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.767 [2024-07-24 22:12:09.828043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.767 [2024-07-24 22:12:09.828054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.768 [2024-07-24 22:12:09.828063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.768 [2024-07-24 22:12:09.828073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.768 [2024-07-24 22:12:09.828081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.768 [2024-07-24 22:12:09.828090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:30.768 [2024-07-24 22:12:09.838015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:30.768 [2024-07-24 22:12:09.848051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.768 [2024-07-24 22:12:09.848437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.768 [2024-07-24 22:12:09.848453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.768 [2024-07-24 22:12:09.848463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.768 [2024-07-24 22:12:09.848477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.768 [2024-07-24 22:12:09.848497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.768 [2024-07-24 22:12:09.848506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.768 [2024-07-24 22:12:09.848516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:30.768 [2024-07-24 22:12:09.848531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.768 [2024-07-24 22:12:09.858108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.768 [2024-07-24 22:12:09.858382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.768 [2024-07-24 22:12:09.858396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.768 [2024-07-24 22:12:09.858406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.768 [2024-07-24 22:12:09.858418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.768 [2024-07-24 22:12:09.858436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.768 [2024-07-24 22:12:09.858445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.768 [2024-07-24 22:12:09.858454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.768 [2024-07-24 22:12:09.858466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.768 [2024-07-24 22:12:09.868159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.768 [2024-07-24 22:12:09.868482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.768 [2024-07-24 22:12:09.868496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.768 [2024-07-24 22:12:09.868504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.768 [2024-07-24 22:12:09.868516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.768 [2024-07-24 22:12:09.868528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.768 [2024-07-24 22:12:09.868536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.768 [2024-07-24 22:12:09.868545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.768 [2024-07-24 22:12:09.868555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
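The repeated "resetting controller ... connect() failed, errno = 111 ... Resetting controller failed" messages in this stretch are expected rather than a test failure: host/discovery.sh@127, traced a few lines above, removed the 4420 listener with rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420, so the host's reconnect attempts to 10.0.0.2:4420 are refused until the discovery poller notices the path is gone and drops it. The test then waits for the remaining path list to shrink to the second port only; a condensed sketch of that check (reusing the hypothetical rpc_cmd/waitforcondition helpers sketched earlier, with 4421 standing in for $NVMF_SECOND_PORT):

# Hypothetical condensed form of the check at discovery.sh@131: after the
# 4420 listener is removed, wait until the only remaining path for nvme0
# is the second port.
get_subsystem_paths() {
    local name=$1
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'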
00:24:30.768 [2024-07-24 22:12:09.878210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.768 [2024-07-24 22:12:09.878600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.768 [2024-07-24 22:12:09.878616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.768 [2024-07-24 22:12:09.878625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.768 [2024-07-24 22:12:09.878638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.768 [2024-07-24 22:12:09.878662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.768 [2024-07-24 22:12:09.878672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.768 [2024-07-24 22:12:09.878681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.768 [2024-07-24 22:12:09.878692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:30.768 [2024-07-24 22:12:09.888268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.768 [2024-07-24 22:12:09.888545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.768 [2024-07-24 22:12:09.888558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.768 [2024-07-24 22:12:09.888567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.768 [2024-07-24 22:12:09.888580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.768 [2024-07-24 22:12:09.888592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.768 [2024-07-24 22:12:09.888600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.768 [2024-07-24 22:12:09.888609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:30.768 [2024-07-24 22:12:09.888620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:30.768 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:30.768 [2024-07-24 22:12:09.898320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.768 [2024-07-24 22:12:09.898634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.768 [2024-07-24 22:12:09.898649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.768 [2024-07-24 22:12:09.898658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.768 [2024-07-24 22:12:09.898671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.768 [2024-07-24 22:12:09.898684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.768 [2024-07-24 22:12:09.898692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.768 [2024-07-24 22:12:09.898701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.768 [2024-07-24 22:12:09.898712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.768 [2024-07-24 22:12:09.908376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.768 [2024-07-24 22:12:09.908720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.768 [2024-07-24 22:12:09.908734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.768 [2024-07-24 22:12:09.908746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.768 [2024-07-24 22:12:09.908759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.768 [2024-07-24 22:12:09.908771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.768 [2024-07-24 22:12:09.908780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.769 [2024-07-24 22:12:09.908788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.769 [2024-07-24 22:12:09.908799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.769 [2024-07-24 22:12:09.918426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.769 [2024-07-24 22:12:09.918763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.769 [2024-07-24 22:12:09.918778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.769 [2024-07-24 22:12:09.918787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.769 [2024-07-24 22:12:09.918800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.769 [2024-07-24 22:12:09.918817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.769 [2024-07-24 22:12:09.918826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.769 [2024-07-24 22:12:09.918835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.769 [2024-07-24 22:12:09.918846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.769 [2024-07-24 22:12:09.928478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.769 [2024-07-24 22:12:09.928822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.769 [2024-07-24 22:12:09.928836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.769 [2024-07-24 22:12:09.928845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.769 [2024-07-24 22:12:09.928858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.769 [2024-07-24 22:12:09.928870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.769 [2024-07-24 22:12:09.928878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.769 [2024-07-24 22:12:09.928887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.769 [2024-07-24 22:12:09.928898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.769 [2024-07-24 22:12:09.938529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.769 [2024-07-24 22:12:09.938871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.769 [2024-07-24 22:12:09.938885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.769 [2024-07-24 22:12:09.938894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.769 [2024-07-24 22:12:09.938907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.769 [2024-07-24 22:12:09.938924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.769 [2024-07-24 22:12:09.938936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.769 [2024-07-24 22:12:09.938945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.769 [2024-07-24 22:12:09.938955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.769 [2024-07-24 22:12:09.948580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.769 [2024-07-24 22:12:09.948870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.769 [2024-07-24 22:12:09.948884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75fd0 with addr=10.0.0.2, port=4420 00:24:30.769 [2024-07-24 22:12:09.948894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fd0 is same with the state(5) to be set 00:24:30.769 [2024-07-24 22:12:09.948906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fd0 (9): Bad file descriptor 00:24:30.769 [2024-07-24 22:12:09.948917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.769 [2024-07-24 22:12:09.948926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.769 [2024-07-24 22:12:09.948934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.769 [2024-07-24 22:12:09.948945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
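The trace that resumes below re-checks the async notification counter with notify_get_notifications -i <offset>. A minimal sketch of how get_notification_count appears to behave here; the offset bookkeeping (notify_id advancing by the number of notifications just read) is an assumption inferred from the notification_count/notify_id values printed in the trace, not copied from the script:

# Hypothetical sketch of the notification accounting used throughout this test.
# notify_id is the offset of the last notification already consumed.
notify_id=0

get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
        | jq '. | length')
    notify_id=$(( notify_id + notification_count ))
}

# is_notification_count_eq N then reduces to:
#   waitforcondition 'get_notification_count && ((notification_count == N))'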
00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:30.769 [2024-07-24 22:12:09.954477] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:30.769 [2024-07-24 22:12:09.954495] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:30.769 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@917 -- # get_notification_count 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:31.029 22:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:31.029 22:12:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:31.029 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.030 22:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.410 [2024-07-24 22:12:11.214043] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:32.410 [2024-07-24 22:12:11.214061] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:32.410 [2024-07-24 22:12:11.214073] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:32.410 [2024-07-24 22:12:11.340457] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:32.410 [2024-07-24 22:12:11.401054] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:32.410 [2024-07-24 22:12:11.401081] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:32.410 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.410 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:32.410 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:32.410 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:32.410 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:32.410 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.410 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:32.410 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.411 request: 00:24:32.411 { 00:24:32.411 "name": "nvme", 00:24:32.411 "trtype": "tcp", 00:24:32.411 "traddr": "10.0.0.2", 00:24:32.411 "adrfam": "ipv4", 00:24:32.411 "trsvcid": "8009", 00:24:32.411 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:32.411 "wait_for_attach": true, 00:24:32.411 "method": "bdev_nvme_start_discovery", 00:24:32.411 "req_id": 1 00:24:32.411 } 00:24:32.411 Got JSON-RPC error response 00:24:32.411 response: 00:24:32.411 { 00:24:32.411 "code": -17, 00:24:32.411 "message": "File exists" 00:24:32.411 } 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.411 request: 00:24:32.411 { 00:24:32.411 "name": "nvme_second", 00:24:32.411 "trtype": "tcp", 00:24:32.411 "traddr": "10.0.0.2", 00:24:32.411 "adrfam": "ipv4", 00:24:32.411 "trsvcid": "8009", 00:24:32.411 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:32.411 "wait_for_attach": true, 00:24:32.411 "method": "bdev_nvme_start_discovery", 00:24:32.411 "req_id": 1 00:24:32.411 } 00:24:32.411 Got JSON-RPC error response 00:24:32.411 response: 00:24:32.411 { 00:24:32.411 "code": -17, 00:24:32.411 "message": "File exists" 00:24:32.411 } 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:32.411 22:12:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:32.411 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.671 22:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.616 [2024-07-24 22:12:12.668954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.616 [2024-07-24 22:12:12.668983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd81840 with addr=10.0.0.2, port=8010 00:24:33.616 [2024-07-24 22:12:12.668997] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:33.616 [2024-07-24 22:12:12.669005] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:33.616 [2024-07-24 22:12:12.669013] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:34.553 [2024-07-24 22:12:13.671504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.553 [2024-07-24 22:12:13.671530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd81840 with addr=10.0.0.2, port=8010 00:24:34.553 [2024-07-24 22:12:13.671542] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:34.553 [2024-07-24 22:12:13.671551] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:24:34.553 [2024-07-24 22:12:13.671559] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:35.490 [2024-07-24 22:12:14.673588] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:35.490 request: 00:24:35.490 { 00:24:35.490 "name": "nvme_second", 00:24:35.490 "trtype": "tcp", 00:24:35.490 "traddr": "10.0.0.2", 00:24:35.490 "adrfam": "ipv4", 00:24:35.490 "trsvcid": "8010", 00:24:35.490 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:35.490 "wait_for_attach": false, 00:24:35.490 "attach_timeout_ms": 3000, 00:24:35.490 "method": "bdev_nvme_start_discovery", 00:24:35.490 "req_id": 1 00:24:35.490 } 00:24:35.490 Got JSON-RPC error response 00:24:35.490 response: 00:24:35.490 { 00:24:35.490 "code": -110, 00:24:35.490 "message": "Connection timed out" 00:24:35.490 } 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:35.490 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2795286 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.750 rmmod nvme_tcp 00:24:35.750 rmmod nvme_fabrics 00:24:35.750 rmmod nvme_keyring 00:24:35.750 22:12:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2794982 ']' 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2794982 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2794982 ']' 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2794982 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2794982 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2794982' 00:24:35.750 killing process with pid 2794982 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2794982 00:24:35.750 22:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2794982 00:24:36.010 22:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:36.010 22:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:36.010 22:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:36.010 22:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:36.010 22:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:36.010 22:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.010 22:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.010 22:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.917 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:37.917 00:24:37.917 real 0m19.184s 00:24:37.917 user 0m22.359s 00:24:37.917 sys 0m7.163s 00:24:37.917 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:37.917 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.917 ************************************ 00:24:37.917 END TEST nvmf_host_discovery 00:24:37.917 ************************************ 00:24:38.176 22:12:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:38.176 22:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
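Note: the nvmf_host_discovery trace above boils down to three bdev_nvme_start_discovery calls against the host-side RPC socket, exercising the success path, the duplicate-start error (-17 "File exists") and the attach-timeout error (-110 "Connection timed out"). A minimal sketch, assuming the same socket and addresses and the flag spellings visible in the rpc_cmd lines of the trace:

# sketch of the discovery error paths exercised above (addresses/ports from this run)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# first start succeeds and -w waits for the attach to complete
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w
# repeating the same start is rejected with -17 "File exists"
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "expected: File exists"
# a discovery service that never answers (port 8010 here) fails after -T ms
# with -110 "Connection timed out"
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected: Connection timed out"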
00:24:38.176 22:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:38.176 22:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.176 ************************************ 00:24:38.176 START TEST nvmf_host_multipath_status 00:24:38.176 ************************************ 00:24:38.176 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:38.176 * Looking for test storage... 00:24:38.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.176 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.176 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:38.176 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:38.177 22:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:44.750 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.750 
22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:44.750 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:44.750 Found net devices under 0000:af:00.0: cvl_0_0 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.750 22:12:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:44.750 Found net devices under 0000:af:00.1: cvl_0_1 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.750 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.751 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.751 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.751 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.751 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.751 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.751 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.751 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.751 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.751 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:24:45.010 00:24:45.010 --- 10.0.0.2 ping statistics --- 00:24:45.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.010 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:24:45.010 00:24:45.010 --- 10.0.0.1 ping statistics --- 00:24:45.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.010 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.010 22:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2800669 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2800669 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2800669 ']' 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
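Note: the nvmf_tcp_init steps logged above amount to moving the target-side port into its own network namespace and checking connectivity both ways. Roughly, with the interface names and addresses this rig detected (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2; root required; other hardware will differ):

# rough shape of the nvmf_tcp_init sequence shown in the trace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # target interface gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns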
00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:45.010 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:45.010 [2024-07-24 22:12:24.080185] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:24:45.010 [2024-07-24 22:12:24.080235] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.010 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.010 [2024-07-24 22:12:24.152360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:45.269 [2024-07-24 22:12:24.224558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.269 [2024-07-24 22:12:24.224596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.269 [2024-07-24 22:12:24.224606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.269 [2024-07-24 22:12:24.224615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.269 [2024-07-24 22:12:24.224624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.269 [2024-07-24 22:12:24.224666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.270 [2024-07-24 22:12:24.224670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.838 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.838 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:45.838 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:45.838 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:45.838 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:45.838 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.838 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2800669 00:24:45.838 22:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:46.096 [2024-07-24 22:12:25.079541] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.096 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:46.096 Malloc0 00:24:46.096 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:46.356 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.615 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.615 [2024-07-24 22:12:25.802413] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.615 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:46.874 [2024-07-24 22:12:25.970898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2801019 00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2801019 /var/tmp/bdevperf.sock 00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2801019 ']' 00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
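Note: the target bring-up in this run reduces to the RPC sequence below; the socket paths, NQN, bdev size and bdevperf flags are the values visible in the trace (multipath_status.sh@33 through @48).

# target and initiator bring-up as driven by multipath_status.sh in this run
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# bdevperf is the initiator-side application, configured over its own RPC socket
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &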
00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.874 22:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:47.873 22:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.873 22:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:47.873 22:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:47.873 22:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:48.442 Nvme0n1 00:24:48.442 22:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:49.010 Nvme0n1 00:24:49.010 22:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:49.010 22:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:50.915 22:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:50.915 22:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:51.175 22:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:51.175 22:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.553 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.813 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.813 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.813 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.813 22:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:53.072 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.072 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:53.072 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.072 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:53.072 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.072 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:53.072 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:53.072 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.331 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.331 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:53.331 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:53.590 22:12:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:53.848 22:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:54.785 22:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:54.785 22:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:54.785 22:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.785 22:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:55.044 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.044 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:55.044 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.044 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:55.044 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.044 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:55.044 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.044 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.302 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.302 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:55.302 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.302 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.562 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.562 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:55.562 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.562 22:12:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:55.821 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.821 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:55.821 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.821 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.821 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.821 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:55.821 22:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:56.079 22:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:56.338 22:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:57.273 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:57.273 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:57.273 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.273 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.530 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.530 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:57.530 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.531 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:57.531 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:57.531 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:57.531 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.531 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.789 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.789 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.789 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.789 22:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.048 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.048 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:58.048 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.048 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:58.048 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.048 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:58.048 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.315 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.315 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.315 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:58.315 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:58.575 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:58.832 22:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:59.769 22:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:59.769 22:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:59.769 22:12:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.769 22:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:00.028 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.028 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:00.028 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:00.028 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.028 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.028 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:00.028 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.028 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:00.287 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.287 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:00.287 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.287 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.545 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.545 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.545 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.545 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.804 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.804 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:00.804 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.804 22:12:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.804 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.804 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:00.804 22:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:01.063 22:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:01.323 22:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:02.299 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:02.299 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:02.299 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.299 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.558 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.558 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:02.558 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.558 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.558 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.558 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.558 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.558 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.817 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.817 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.817 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:25:02.817 22:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.077 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.077 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:03.077 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.077 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.077 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:03.077 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:03.077 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.077 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.336 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:03.336 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:03.336 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:03.596 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:03.596 22:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:04.533 22:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:04.533 22:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:04.533 22:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:04.792 22:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.792 22:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:04.792 22:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:04.792 22:12:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:04.792 22:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.051 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.051 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.051 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.051 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.310 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.310 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.310 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.310 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:05.310 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.310 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:05.310 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.310 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:05.569 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.569 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:05.569 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.569 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:05.828 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.828 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:05.828 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:05.828 22:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:06.086 22:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:06.346 22:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:07.284 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:07.284 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:07.284 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.284 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:07.543 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.544 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:07.544 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.544 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:07.803 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.803 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:07.803 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.803 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.803 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.803 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.803 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.803 22:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.062 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.062 22:12:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.062 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.062 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.321 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.321 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:08.321 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.321 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.321 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.321 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:08.321 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:08.581 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:08.840 22:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:09.778 22:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:09.778 22:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:09.778 22:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.778 22:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.037 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.037 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:10.037 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.037 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.037 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.037 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.037 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.037 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:10.297 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.297 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:10.297 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.297 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:10.556 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.556 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:10.556 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.556 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:10.816 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.816 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:10.816 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.816 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:10.816 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.816 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:10.816 22:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:11.075 22:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:11.335 22:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
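Each check_status round in this transcript reduces to six port_status probes, and each probe is one bdev_nvme_get_io_paths RPC filtered with jq. A hedged sketch of what those helpers appear to do, reconstructed only from the rpc.py and jq invocations visible in this log (argument names are illustrative, not copied from multipath_status.sh):

    # Sketch, assuming the conventions shown in the log above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    port_status() {
        # Query bdevperf's per-poll-group I/O paths and pull one boolean field
        # for the listener on the given port, then compare to the expected value.
        local port=$1 attr=$2 expected=$3
        local value
        value=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$value" == "$expected" ]]
    }

    # Six flags, in the order the log pairs them with port_status calls:
    # 4420 current, 4421 current, 4420 connected, 4421 connected,
    # 4420 accessible, 4421 accessible.
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }
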
00:25:12.273 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:12.273 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:12.273 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.273 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:12.532 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.532 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:12.532 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.532 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:12.532 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.532 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:12.532 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:12.532 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.793 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.793 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:12.793 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.793 22:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.051 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.051 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:13.051 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.051 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:13.051 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.051 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:13.051 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.051 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:13.310 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.310 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:13.310 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:13.569 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:13.828 22:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:14.764 22:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:14.764 22:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:14.764 22:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.764 22:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:15.023 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.023 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:15.023 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.023 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:15.023 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.023 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:15.023 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.023 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:15.282 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:15.282 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:15.282 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.282 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:15.583 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.583 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:15.583 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.583 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:15.583 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.583 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:15.583 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:15.583 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2801019 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2801019 ']' 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2801019 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2801019 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2801019' 00:25:15.875 killing process with pid 2801019 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2801019 00:25:15.875 22:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2801019 00:25:15.875 Connection closed with partial response: 00:25:15.875 00:25:15.875 00:25:16.138 
22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2801019 00:25:16.138 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:16.138 [2024-07-24 22:12:26.033142] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:25:16.138 [2024-07-24 22:12:26.033196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801019 ] 00:25:16.138 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.138 [2024-07-24 22:12:26.098528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.138 [2024-07-24 22:12:26.172044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.138 Running I/O for 90 seconds... 00:25:16.138 [2024-07-24 22:12:40.129076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
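The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions in the bdevperf log being dumped here come from I/O submitted on a listener whose ANA state the test had switched away from optimized. The set_ANA_state helper driving those switches appears to be nothing more than the two listener RPCs logged earlier in this transcript; a hedged sketch (the function body is inferred from those calls, not taken from multipath_status.sh):

    # Sketch, inferred from the rpc.py calls logged above: set the ANA state of the
    # two listeners (ports 4420 and 4421) on the same subsystem to $1 and $2.
    set_ANA_state() {
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
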
00:25:16.138 [2024-07-24 22:12:40.129310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.129984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.129999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:16.138 [2024-07-24 22:12:40.130524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.138 [2024-07-24 22:12:40.130534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.130975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.130985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.139 [2024-07-24 22:12:40.131033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.139 [2024-07-24 22:12:40.131058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.139 [2024-07-24 22:12:40.131082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 
22:12:40.131097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.139 [2024-07-24 22:12:40.131107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.139 [2024-07-24 22:12:40.131131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.139 [2024-07-24 22:12:40.131156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.139 [2024-07-24 22:12:40.131269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.139 [2024-07-24 22:12:40.131592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.139 [2024-07-24 22:12:40.131603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.131986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:16.140 [2024-07-24 22:12:40.131995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:77 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132537] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:16.140 [2024-07-24 22:12:40.132770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.140 [2024-07-24 22:12:40.132779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.132801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:40.132810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.132831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:40.132840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.132861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:40.132871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.132892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:40.132901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 
sqhd:0020 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.132922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:40.132932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.132953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:40.132962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.132983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:40.132992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.133013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:40.133023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.133043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:40.133052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.133074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.141 [2024-07-24 22:12:40.133085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.133106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.141 [2024-07-24 22:12:40.133115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.133137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.141 [2024-07-24 22:12:40.133146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.133167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.141 [2024-07-24 22:12:40.133176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.133197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.141 [2024-07-24 22:12:40.133206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.133228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.141 [2024-07-24 22:12:40.133239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:40.133260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.141 [2024-07-24 22:12:40.133270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.797506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.797551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.797588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.797599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.797614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.797624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.797639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.797649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.797663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.797673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.797688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.797702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.797723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.797733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.797748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.797758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.797976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.797987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.141 [2024-07-24 22:12:52.798083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.141 [2024-07-24 22:12:52.798293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.141 [2024-07-24 22:12:52.798302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.798317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.798326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.798341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.798351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.798365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.798375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.798389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.798399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.798414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.142 [2024-07-24 22:12:52.798424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 
nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:16.142 [2024-07-24 22:12:52.800605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.142 [2024-07-24 22:12:52.800615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:16.142 Received shutdown signal, test time was about 26.867798 seconds 00:25:16.142 00:25:16.142 Latency(us) 00:25:16.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.142 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:16.142 Verification LBA range: start 0x0 length 0x4000 00:25:16.142 Nvme0n1 : 26.87 11044.80 43.14 0.00 0.00 11570.23 222.82 3019898.88 00:25:16.142 =================================================================================================================== 00:25:16.142 Total : 11044.80 43.14 0.00 0.00 11570.23 222.82 3019898.88 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.142 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.142 rmmod nvme_tcp 00:25:16.402 rmmod nvme_fabrics 00:25:16.402 rmmod nvme_keyring 00:25:16.402 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.402 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:16.402 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:16.402 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2800669 ']' 00:25:16.402 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2800669 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2800669 ']' 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2800669 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2800669 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2800669' 00:25:16.403 killing process with pid 2800669 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2800669 00:25:16.403 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2800669 00:25:16.662 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:16.662 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:16.662 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:16.662 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.662 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:16.662 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.662 
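The xtrace above covers the tail of nvmftestfini: the kernel NVMe modules are unloaded with modprobe -v -r (nvme-tcp first, which also drops nvme_tcp/nvme_fabrics/nvme_keyring, then nvme-fabrics), and the long-running nvmf target (pid 2800669, process name reactor_0) is stopped via killprocess. The following is only a condensed sketch of that flow, reconstructed from the commands visible in this trace; the real helpers live in spdk/test/nvmf/common.sh and spdk/test/common/autotest_common.sh and handle more cases (other transports, a sudo-wrapped target), so the structure and names here are illustrative, not the actual implementation.

nvmfcleanup() {                         # sketch of the module-unload half
    sync
    # only attempted for kernel transports (the trace checks TEST_TRANSPORT == tcp)
    set +e
    for i in {1..20}; do                # retry: rmmod can race with in-flight I/O
        modprobe -v -r nvme-tcp         # unloads nvme_tcp, nvme_fabrics, nvme_keyring
        modprobe -v -r nvme-fabrics && break
    done
    set -e
}

killprocess() {                         # sketch of the target-shutdown half
    local pid=$1
    [[ -n $pid ]] || return 1           # no pid recorded, nothing to do
    kill -0 "$pid"                      # confirm the process is still alive
    local process_name
    [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name != sudo ]]; then    # here: reactor_0, the nvmf target app
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                         # reap it so the workspace can be cleaned
}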
22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.662 22:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.568 22:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:18.568 00:25:18.568 real 0m40.540s 00:25:18.568 user 1m43.325s 00:25:18.568 sys 0m14.412s 00:25:18.568 22:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:18.568 22:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:18.568 ************************************ 00:25:18.568 END TEST nvmf_host_multipath_status 00:25:18.568 ************************************ 00:25:18.568 22:12:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:18.568 22:12:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:18.568 22:12:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:18.568 22:12:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.827 ************************************ 00:25:18.827 START TEST nvmf_discovery_remove_ifc 00:25:18.827 ************************************ 00:25:18.827 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:18.827 * Looking for test storage... 00:25:18.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:18.827 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.827 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:18.827 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.827 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.827 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.827 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # 
NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.828 22:12:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:18.828 22:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:25.401 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:25.401 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.401 22:13:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:25.401 Found net devices under 0000:af:00.0: cvl_0_0 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.401 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:25.402 Found net devices under 0000:af:00.1: cvl_0_1 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:25.402 
22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:25.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:25:25.402 00:25:25.402 --- 10.0.0.2 ping statistics --- 00:25:25.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.402 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:25.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:25:25.402 00:25:25.402 --- 10.0.0.1 ping statistics --- 00:25:25.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.402 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2809640 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2809640 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2809640 ']' 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.402 22:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.660 [2024-07-24 22:13:04.651291] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:25:25.660 [2024-07-24 22:13:04.651341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.660 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.660 [2024-07-24 22:13:04.723292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.660 [2024-07-24 22:13:04.795304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.660 [2024-07-24 22:13:04.795341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.660 [2024-07-24 22:13:04.795350] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.660 [2024-07-24 22:13:04.795359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.660 [2024-07-24 22:13:04.795366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:25.660 [2024-07-24 22:13:04.795392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.595 [2024-07-24 22:13:05.508261] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.595 [2024-07-24 22:13:05.516396] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:26.595 null0 00:25:26.595 [2024-07-24 22:13:05.548420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2809915 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2809915 /tmp/host.sock 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2809915 ']' 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:26.595 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.595 22:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.595 [2024-07-24 22:13:05.618645] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:25:26.595 [2024-07-24 22:13:05.618691] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809915 ] 00:25:26.595 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.595 [2024-07-24 22:13:05.686965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.595 [2024-07-24 22:13:05.755790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.531 22:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.468 [2024-07-24 22:13:07.556940] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 
00:25:28.468 [2024-07-24 22:13:07.556961] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:28.468 [2024-07-24 22:13:07.556975] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:28.468 [2024-07-24 22:13:07.645247] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:28.728 [2024-07-24 22:13:07.708388] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:28.728 [2024-07-24 22:13:07.708433] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:28.728 [2024-07-24 22:13:07.708454] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:28.728 [2024-07-24 22:13:07.708468] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:28.728 [2024-07-24 22:13:07.708487] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.728 [2024-07-24 22:13:07.715583] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x192cd40 was disconnected and freed. delete nvme_qpair. 
00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.728 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.987 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:28.987 22:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:29.924 22:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:30.860 22:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:32.253 22:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:33.192 22:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 
1 00:25:34.131 [2024-07-24 22:13:13.149502] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:34.131 [2024-07-24 22:13:13.149542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.131 [2024-07-24 22:13:13.149555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.131 [2024-07-24 22:13:13.149565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.131 [2024-07-24 22:13:13.149575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.131 [2024-07-24 22:13:13.149584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.131 [2024-07-24 22:13:13.149594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.131 [2024-07-24 22:13:13.149607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.131 [2024-07-24 22:13:13.149616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.131 [2024-07-24 22:13:13.149626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.131 [2024-07-24 22:13:13.149635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.131 [2024-07-24 22:13:13.149644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3740 is same with the state(5) to be set 00:25:34.131 [2024-07-24 22:13:13.159524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f3740 (9): Bad file descriptor 00:25:34.131 22:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.131 22:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.131 22:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.131 22:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.131 22:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.131 22:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.131 22:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.131 [2024-07-24 22:13:13.169561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.113 [2024-07-24 22:13:14.199784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:35.113 [2024-07-24 22:13:14.199831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f3740 with addr=10.0.0.2, port=4420 
00:25:35.113 [2024-07-24 22:13:14.199850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3740 is same with the state(5) to be set 00:25:35.113 [2024-07-24 22:13:14.199880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f3740 (9): Bad file descriptor 00:25:35.113 [2024-07-24 22:13:14.200272] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:35.113 [2024-07-24 22:13:14.200302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.113 [2024-07-24 22:13:14.200316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.113 [2024-07-24 22:13:14.200330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.113 [2024-07-24 22:13:14.200351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.113 [2024-07-24 22:13:14.200363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.113 22:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.113 22:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:35.113 22:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:36.051 [2024-07-24 22:13:15.202829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:36.051 [2024-07-24 22:13:15.202850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:36.051 [2024-07-24 22:13:15.202859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:36.051 [2024-07-24 22:13:15.202868] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:36.051 [2024-07-24 22:13:15.202900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.051 [2024-07-24 22:13:15.202919] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:36.051 [2024-07-24 22:13:15.202939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.051 [2024-07-24 22:13:15.202950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.051 [2024-07-24 22:13:15.202961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.051 [2024-07-24 22:13:15.202970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.051 [2024-07-24 22:13:15.202979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.051 [2024-07-24 22:13:15.202988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.051 [2024-07-24 22:13:15.202997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.051 [2024-07-24 22:13:15.203006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.051 [2024-07-24 22:13:15.203015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.051 [2024-07-24 22:13:15.203024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.051 [2024-07-24 22:13:15.203033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:36.051 [2024-07-24 22:13:15.203066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f2ba0 (9): Bad file descriptor 00:25:36.051 [2024-07-24 22:13:15.204068] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:36.051 [2024-07-24 22:13:15.204079] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:36.051 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:36.051 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.051 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:36.051 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.051 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.051 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:36.051 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:36.051 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:36.310 22:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:37.248 22:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:37.248 22:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.248 22:13:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:37.248 22:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:37.248 22:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.248 22:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.248 22:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:37.248 22:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.507 22:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:37.507 22:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:38.079 [2024-07-24 22:13:17.262863] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:38.079 [2024-07-24 22:13:17.262880] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:38.079 [2024-07-24 22:13:17.262896] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:38.338 [2024-07-24 22:13:17.349166] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:38.338 [2024-07-24 22:13:17.453649] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:38.338 [2024-07-24 22:13:17.453682] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:38.338 [2024-07-24 22:13:17.453700] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:38.338 [2024-07-24 22:13:17.453713] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:38.338 [2024-07-24 22:13:17.453728] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:38.338 [2024-07-24 22:13:17.460645] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18e2110 was disconnected and freed. delete nvme_qpair. 
00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2809915 00:25:38.338 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2809915 ']' 00:25:38.339 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2809915 00:25:38.339 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:38.339 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2809915 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2809915' 00:25:38.598 killing process with pid 2809915 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2809915 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2809915 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:38.598 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:38.598 rmmod nvme_tcp 00:25:38.598 rmmod nvme_fabrics 00:25:38.857 rmmod nvme_keyring 00:25:38.857 22:13:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2809640 ']' 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2809640 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2809640 ']' 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2809640 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2809640 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2809640' 00:25:38.857 killing process with pid 2809640 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2809640 00:25:38.857 22:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2809640 00:25:39.116 22:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:39.116 22:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:39.116 22:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:39.116 22:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:39.116 22:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:39.116 22:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.116 22:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.116 22:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.023 22:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:41.023 00:25:41.023 real 0m22.352s 00:25:41.023 user 0m26.296s 00:25:41.023 sys 0m7.117s 00:25:41.023 22:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:41.023 22:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.023 ************************************ 00:25:41.023 END TEST nvmf_discovery_remove_ifc 00:25:41.023 ************************************ 00:25:41.023 22:13:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:41.023 22:13:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:41.023 22:13:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:41.024 22:13:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.283 ************************************ 00:25:41.283 START TEST nvmf_identify_kernel_target 00:25:41.283 ************************************ 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:41.283 * Looking for test storage... 00:25:41.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.283 22:13:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:41.283 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:41.284 22:13:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:47.856 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.856 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:47.856 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:47.856 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:47.856 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:47.856 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:47.856 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:47.856 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:47.857 
22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:47.857 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:47.857 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:47.857 Found net devices under 0000:af:00.0: cvl_0_0 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:47.857 Found net devices under 0000:af:00.1: cvl_0_1 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:47.857 22:13:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:47.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:47.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:25:47.857 00:25:47.857 --- 10.0.0.2 ping statistics --- 00:25:47.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.857 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:47.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:25:47.857 00:25:47.857 --- 10.0.0.1 ping statistics --- 00:25:47.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.857 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:47.857 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:48.117 22:13:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:51.407 Waiting for block devices as requested 00:25:51.407 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:51.407 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:51.407 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:51.407 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:51.407 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:51.407 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:51.407 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:51.665 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:51.665 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:51.665 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:51.923 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:51.923 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:51.923 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:52.182 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:52.182 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:52.182 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:52.441 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:52.441 No valid GPT data, bailing 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:52.441 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:25:52.702 00:25:52.702 Discovery Log Number of Records 2, Generation counter 2 00:25:52.702 =====Discovery Log Entry 0====== 00:25:52.702 trtype: tcp 00:25:52.702 adrfam: ipv4 00:25:52.702 subtype: current discovery subsystem 00:25:52.702 treq: not specified, sq flow control disable supported 00:25:52.702 portid: 1 00:25:52.702 trsvcid: 4420 00:25:52.702 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:52.702 traddr: 10.0.0.1 00:25:52.702 eflags: none 00:25:52.702 sectype: none 00:25:52.702 =====Discovery Log Entry 1====== 00:25:52.702 trtype: tcp 00:25:52.702 adrfam: ipv4 00:25:52.702 subtype: nvme subsystem 00:25:52.702 treq: not specified, sq flow control disable supported 00:25:52.702 portid: 1 00:25:52.702 trsvcid: 4420 00:25:52.702 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:52.702 traddr: 10.0.0.1 00:25:52.702 eflags: none 00:25:52.702 sectype: none 00:25:52.702 22:13:31 
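The configure_kernel_target step traced above builds a kernel NVMe/TCP target purely through configfs, exporting the local /dev/nvme0n1 as namespace 1 of nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420. Condensed into a standalone sketch; the attribute file names follow the standard nvmet configfs layout, and allow_any_host is assumed rather than per-host ACLs:

    modprobe nvmet nvmet-tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"        # skip host NQN whitelisting
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # back namespace 1 with the local NVMe disk
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port
    nvme discover -t tcp -a 10.0.0.1 -s 4420                 # should list a discovery entry plus testnqn

The two discovery log records printed above (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn) are exactly what this sequence is expected to produce.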
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:52.702 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:52.702 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.702 ===================================================== 00:25:52.702 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:52.702 ===================================================== 00:25:52.702 Controller Capabilities/Features 00:25:52.702 ================================ 00:25:52.702 Vendor ID: 0000 00:25:52.702 Subsystem Vendor ID: 0000 00:25:52.702 Serial Number: 6435165faf794da21765 00:25:52.702 Model Number: Linux 00:25:52.702 Firmware Version: 6.7.0-68 00:25:52.702 Recommended Arb Burst: 0 00:25:52.702 IEEE OUI Identifier: 00 00 00 00:25:52.702 Multi-path I/O 00:25:52.702 May have multiple subsystem ports: No 00:25:52.702 May have multiple controllers: No 00:25:52.702 Associated with SR-IOV VF: No 00:25:52.702 Max Data Transfer Size: Unlimited 00:25:52.702 Max Number of Namespaces: 0 00:25:52.702 Max Number of I/O Queues: 1024 00:25:52.702 NVMe Specification Version (VS): 1.3 00:25:52.702 NVMe Specification Version (Identify): 1.3 00:25:52.702 Maximum Queue Entries: 1024 00:25:52.702 Contiguous Queues Required: No 00:25:52.702 Arbitration Mechanisms Supported 00:25:52.702 Weighted Round Robin: Not Supported 00:25:52.702 Vendor Specific: Not Supported 00:25:52.702 Reset Timeout: 7500 ms 00:25:52.702 Doorbell Stride: 4 bytes 00:25:52.702 NVM Subsystem Reset: Not Supported 00:25:52.702 Command Sets Supported 00:25:52.702 NVM Command Set: Supported 00:25:52.702 Boot Partition: Not Supported 00:25:52.702 Memory Page Size Minimum: 4096 bytes 00:25:52.702 Memory Page Size Maximum: 4096 bytes 00:25:52.702 Persistent Memory Region: Not Supported 00:25:52.702 Optional Asynchronous Events Supported 00:25:52.702 Namespace Attribute Notices: Not Supported 00:25:52.702 Firmware Activation Notices: Not Supported 00:25:52.702 ANA Change Notices: Not Supported 00:25:52.702 PLE Aggregate Log Change Notices: Not Supported 00:25:52.702 LBA Status Info Alert Notices: Not Supported 00:25:52.702 EGE Aggregate Log Change Notices: Not Supported 00:25:52.702 Normal NVM Subsystem Shutdown event: Not Supported 00:25:52.702 Zone Descriptor Change Notices: Not Supported 00:25:52.702 Discovery Log Change Notices: Supported 00:25:52.702 Controller Attributes 00:25:52.702 128-bit Host Identifier: Not Supported 00:25:52.702 Non-Operational Permissive Mode: Not Supported 00:25:52.702 NVM Sets: Not Supported 00:25:52.702 Read Recovery Levels: Not Supported 00:25:52.702 Endurance Groups: Not Supported 00:25:52.702 Predictable Latency Mode: Not Supported 00:25:52.702 Traffic Based Keep ALive: Not Supported 00:25:52.702 Namespace Granularity: Not Supported 00:25:52.702 SQ Associations: Not Supported 00:25:52.702 UUID List: Not Supported 00:25:52.702 Multi-Domain Subsystem: Not Supported 00:25:52.702 Fixed Capacity Management: Not Supported 00:25:52.702 Variable Capacity Management: Not Supported 00:25:52.702 Delete Endurance Group: Not Supported 00:25:52.702 Delete NVM Set: Not Supported 00:25:52.702 Extended LBA Formats Supported: Not Supported 00:25:52.702 Flexible Data Placement Supported: Not Supported 00:25:52.702 00:25:52.702 Controller Memory Buffer Support 00:25:52.702 ================================ 00:25:52.702 Supported: No 
00:25:52.702 00:25:52.702 Persistent Memory Region Support 00:25:52.702 ================================ 00:25:52.702 Supported: No 00:25:52.702 00:25:52.702 Admin Command Set Attributes 00:25:52.702 ============================ 00:25:52.702 Security Send/Receive: Not Supported 00:25:52.702 Format NVM: Not Supported 00:25:52.702 Firmware Activate/Download: Not Supported 00:25:52.702 Namespace Management: Not Supported 00:25:52.702 Device Self-Test: Not Supported 00:25:52.702 Directives: Not Supported 00:25:52.702 NVMe-MI: Not Supported 00:25:52.702 Virtualization Management: Not Supported 00:25:52.702 Doorbell Buffer Config: Not Supported 00:25:52.702 Get LBA Status Capability: Not Supported 00:25:52.702 Command & Feature Lockdown Capability: Not Supported 00:25:52.702 Abort Command Limit: 1 00:25:52.702 Async Event Request Limit: 1 00:25:52.702 Number of Firmware Slots: N/A 00:25:52.702 Firmware Slot 1 Read-Only: N/A 00:25:52.702 Firmware Activation Without Reset: N/A 00:25:52.702 Multiple Update Detection Support: N/A 00:25:52.702 Firmware Update Granularity: No Information Provided 00:25:52.702 Per-Namespace SMART Log: No 00:25:52.702 Asymmetric Namespace Access Log Page: Not Supported 00:25:52.702 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:52.702 Command Effects Log Page: Not Supported 00:25:52.702 Get Log Page Extended Data: Supported 00:25:52.702 Telemetry Log Pages: Not Supported 00:25:52.702 Persistent Event Log Pages: Not Supported 00:25:52.702 Supported Log Pages Log Page: May Support 00:25:52.702 Commands Supported & Effects Log Page: Not Supported 00:25:52.702 Feature Identifiers & Effects Log Page:May Support 00:25:52.702 NVMe-MI Commands & Effects Log Page: May Support 00:25:52.702 Data Area 4 for Telemetry Log: Not Supported 00:25:52.702 Error Log Page Entries Supported: 1 00:25:52.702 Keep Alive: Not Supported 00:25:52.702 00:25:52.702 NVM Command Set Attributes 00:25:52.702 ========================== 00:25:52.703 Submission Queue Entry Size 00:25:52.703 Max: 1 00:25:52.703 Min: 1 00:25:52.703 Completion Queue Entry Size 00:25:52.703 Max: 1 00:25:52.703 Min: 1 00:25:52.703 Number of Namespaces: 0 00:25:52.703 Compare Command: Not Supported 00:25:52.703 Write Uncorrectable Command: Not Supported 00:25:52.703 Dataset Management Command: Not Supported 00:25:52.703 Write Zeroes Command: Not Supported 00:25:52.703 Set Features Save Field: Not Supported 00:25:52.703 Reservations: Not Supported 00:25:52.703 Timestamp: Not Supported 00:25:52.703 Copy: Not Supported 00:25:52.703 Volatile Write Cache: Not Present 00:25:52.703 Atomic Write Unit (Normal): 1 00:25:52.703 Atomic Write Unit (PFail): 1 00:25:52.703 Atomic Compare & Write Unit: 1 00:25:52.703 Fused Compare & Write: Not Supported 00:25:52.703 Scatter-Gather List 00:25:52.703 SGL Command Set: Supported 00:25:52.703 SGL Keyed: Not Supported 00:25:52.703 SGL Bit Bucket Descriptor: Not Supported 00:25:52.703 SGL Metadata Pointer: Not Supported 00:25:52.703 Oversized SGL: Not Supported 00:25:52.703 SGL Metadata Address: Not Supported 00:25:52.703 SGL Offset: Supported 00:25:52.703 Transport SGL Data Block: Not Supported 00:25:52.703 Replay Protected Memory Block: Not Supported 00:25:52.703 00:25:52.703 Firmware Slot Information 00:25:52.703 ========================= 00:25:52.703 Active slot: 0 00:25:52.703 00:25:52.703 00:25:52.703 Error Log 00:25:52.703 ========= 00:25:52.703 00:25:52.703 Active Namespaces 00:25:52.703 ================= 00:25:52.703 Discovery Log Page 00:25:52.703 ================== 00:25:52.703 
Generation Counter: 2 00:25:52.703 Number of Records: 2 00:25:52.703 Record Format: 0 00:25:52.703 00:25:52.703 Discovery Log Entry 0 00:25:52.703 ---------------------- 00:25:52.703 Transport Type: 3 (TCP) 00:25:52.703 Address Family: 1 (IPv4) 00:25:52.703 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:52.703 Entry Flags: 00:25:52.703 Duplicate Returned Information: 0 00:25:52.703 Explicit Persistent Connection Support for Discovery: 0 00:25:52.703 Transport Requirements: 00:25:52.703 Secure Channel: Not Specified 00:25:52.703 Port ID: 1 (0x0001) 00:25:52.703 Controller ID: 65535 (0xffff) 00:25:52.703 Admin Max SQ Size: 32 00:25:52.703 Transport Service Identifier: 4420 00:25:52.703 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:52.703 Transport Address: 10.0.0.1 00:25:52.703 Discovery Log Entry 1 00:25:52.703 ---------------------- 00:25:52.703 Transport Type: 3 (TCP) 00:25:52.703 Address Family: 1 (IPv4) 00:25:52.703 Subsystem Type: 2 (NVM Subsystem) 00:25:52.703 Entry Flags: 00:25:52.703 Duplicate Returned Information: 0 00:25:52.703 Explicit Persistent Connection Support for Discovery: 0 00:25:52.703 Transport Requirements: 00:25:52.703 Secure Channel: Not Specified 00:25:52.703 Port ID: 1 (0x0001) 00:25:52.703 Controller ID: 65535 (0xffff) 00:25:52.703 Admin Max SQ Size: 32 00:25:52.703 Transport Service Identifier: 4420 00:25:52.703 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:52.703 Transport Address: 10.0.0.1 00:25:52.703 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:52.703 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.703 get_feature(0x01) failed 00:25:52.703 get_feature(0x02) failed 00:25:52.703 get_feature(0x04) failed 00:25:52.703 ===================================================== 00:25:52.703 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:52.703 ===================================================== 00:25:52.703 Controller Capabilities/Features 00:25:52.703 ================================ 00:25:52.703 Vendor ID: 0000 00:25:52.703 Subsystem Vendor ID: 0000 00:25:52.703 Serial Number: 4f1bdca9df06cd487001 00:25:52.703 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:52.703 Firmware Version: 6.7.0-68 00:25:52.703 Recommended Arb Burst: 6 00:25:52.703 IEEE OUI Identifier: 00 00 00 00:25:52.703 Multi-path I/O 00:25:52.703 May have multiple subsystem ports: Yes 00:25:52.703 May have multiple controllers: Yes 00:25:52.703 Associated with SR-IOV VF: No 00:25:52.703 Max Data Transfer Size: Unlimited 00:25:52.703 Max Number of Namespaces: 1024 00:25:52.703 Max Number of I/O Queues: 128 00:25:52.703 NVMe Specification Version (VS): 1.3 00:25:52.703 NVMe Specification Version (Identify): 1.3 00:25:52.703 Maximum Queue Entries: 1024 00:25:52.703 Contiguous Queues Required: No 00:25:52.703 Arbitration Mechanisms Supported 00:25:52.703 Weighted Round Robin: Not Supported 00:25:52.703 Vendor Specific: Not Supported 00:25:52.703 Reset Timeout: 7500 ms 00:25:52.703 Doorbell Stride: 4 bytes 00:25:52.703 NVM Subsystem Reset: Not Supported 00:25:52.703 Command Sets Supported 00:25:52.703 NVM Command Set: Supported 00:25:52.703 Boot Partition: Not Supported 00:25:52.703 Memory Page Size Minimum: 4096 bytes 00:25:52.703 Memory Page Size Maximum: 4096 bytes 00:25:52.703 
Persistent Memory Region: Not Supported 00:25:52.703 Optional Asynchronous Events Supported 00:25:52.703 Namespace Attribute Notices: Supported 00:25:52.703 Firmware Activation Notices: Not Supported 00:25:52.703 ANA Change Notices: Supported 00:25:52.703 PLE Aggregate Log Change Notices: Not Supported 00:25:52.703 LBA Status Info Alert Notices: Not Supported 00:25:52.703 EGE Aggregate Log Change Notices: Not Supported 00:25:52.703 Normal NVM Subsystem Shutdown event: Not Supported 00:25:52.703 Zone Descriptor Change Notices: Not Supported 00:25:52.703 Discovery Log Change Notices: Not Supported 00:25:52.703 Controller Attributes 00:25:52.703 128-bit Host Identifier: Supported 00:25:52.703 Non-Operational Permissive Mode: Not Supported 00:25:52.703 NVM Sets: Not Supported 00:25:52.703 Read Recovery Levels: Not Supported 00:25:52.703 Endurance Groups: Not Supported 00:25:52.703 Predictable Latency Mode: Not Supported 00:25:52.703 Traffic Based Keep ALive: Supported 00:25:52.703 Namespace Granularity: Not Supported 00:25:52.703 SQ Associations: Not Supported 00:25:52.703 UUID List: Not Supported 00:25:52.703 Multi-Domain Subsystem: Not Supported 00:25:52.703 Fixed Capacity Management: Not Supported 00:25:52.703 Variable Capacity Management: Not Supported 00:25:52.703 Delete Endurance Group: Not Supported 00:25:52.703 Delete NVM Set: Not Supported 00:25:52.703 Extended LBA Formats Supported: Not Supported 00:25:52.703 Flexible Data Placement Supported: Not Supported 00:25:52.703 00:25:52.703 Controller Memory Buffer Support 00:25:52.703 ================================ 00:25:52.703 Supported: No 00:25:52.703 00:25:52.703 Persistent Memory Region Support 00:25:52.703 ================================ 00:25:52.703 Supported: No 00:25:52.703 00:25:52.703 Admin Command Set Attributes 00:25:52.703 ============================ 00:25:52.703 Security Send/Receive: Not Supported 00:25:52.703 Format NVM: Not Supported 00:25:52.703 Firmware Activate/Download: Not Supported 00:25:52.703 Namespace Management: Not Supported 00:25:52.703 Device Self-Test: Not Supported 00:25:52.703 Directives: Not Supported 00:25:52.703 NVMe-MI: Not Supported 00:25:52.703 Virtualization Management: Not Supported 00:25:52.703 Doorbell Buffer Config: Not Supported 00:25:52.703 Get LBA Status Capability: Not Supported 00:25:52.703 Command & Feature Lockdown Capability: Not Supported 00:25:52.703 Abort Command Limit: 4 00:25:52.703 Async Event Request Limit: 4 00:25:52.703 Number of Firmware Slots: N/A 00:25:52.704 Firmware Slot 1 Read-Only: N/A 00:25:52.704 Firmware Activation Without Reset: N/A 00:25:52.704 Multiple Update Detection Support: N/A 00:25:52.704 Firmware Update Granularity: No Information Provided 00:25:52.704 Per-Namespace SMART Log: Yes 00:25:52.704 Asymmetric Namespace Access Log Page: Supported 00:25:52.704 ANA Transition Time : 10 sec 00:25:52.704 00:25:52.704 Asymmetric Namespace Access Capabilities 00:25:52.704 ANA Optimized State : Supported 00:25:52.704 ANA Non-Optimized State : Supported 00:25:52.704 ANA Inaccessible State : Supported 00:25:52.704 ANA Persistent Loss State : Supported 00:25:52.704 ANA Change State : Supported 00:25:52.704 ANAGRPID is not changed : No 00:25:52.704 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:52.704 00:25:52.704 ANA Group Identifier Maximum : 128 00:25:52.704 Number of ANA Group Identifiers : 128 00:25:52.704 Max Number of Allowed Namespaces : 1024 00:25:52.704 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:52.704 Command Effects Log Page: Supported 
00:25:52.704 Get Log Page Extended Data: Supported 00:25:52.704 Telemetry Log Pages: Not Supported 00:25:52.704 Persistent Event Log Pages: Not Supported 00:25:52.704 Supported Log Pages Log Page: May Support 00:25:52.704 Commands Supported & Effects Log Page: Not Supported 00:25:52.704 Feature Identifiers & Effects Log Page:May Support 00:25:52.704 NVMe-MI Commands & Effects Log Page: May Support 00:25:52.704 Data Area 4 for Telemetry Log: Not Supported 00:25:52.704 Error Log Page Entries Supported: 128 00:25:52.704 Keep Alive: Supported 00:25:52.704 Keep Alive Granularity: 1000 ms 00:25:52.704 00:25:52.704 NVM Command Set Attributes 00:25:52.704 ========================== 00:25:52.704 Submission Queue Entry Size 00:25:52.704 Max: 64 00:25:52.704 Min: 64 00:25:52.704 Completion Queue Entry Size 00:25:52.704 Max: 16 00:25:52.704 Min: 16 00:25:52.704 Number of Namespaces: 1024 00:25:52.704 Compare Command: Not Supported 00:25:52.704 Write Uncorrectable Command: Not Supported 00:25:52.704 Dataset Management Command: Supported 00:25:52.704 Write Zeroes Command: Supported 00:25:52.704 Set Features Save Field: Not Supported 00:25:52.704 Reservations: Not Supported 00:25:52.704 Timestamp: Not Supported 00:25:52.704 Copy: Not Supported 00:25:52.704 Volatile Write Cache: Present 00:25:52.704 Atomic Write Unit (Normal): 1 00:25:52.704 Atomic Write Unit (PFail): 1 00:25:52.704 Atomic Compare & Write Unit: 1 00:25:52.704 Fused Compare & Write: Not Supported 00:25:52.704 Scatter-Gather List 00:25:52.704 SGL Command Set: Supported 00:25:52.704 SGL Keyed: Not Supported 00:25:52.704 SGL Bit Bucket Descriptor: Not Supported 00:25:52.704 SGL Metadata Pointer: Not Supported 00:25:52.704 Oversized SGL: Not Supported 00:25:52.704 SGL Metadata Address: Not Supported 00:25:52.704 SGL Offset: Supported 00:25:52.704 Transport SGL Data Block: Not Supported 00:25:52.704 Replay Protected Memory Block: Not Supported 00:25:52.704 00:25:52.704 Firmware Slot Information 00:25:52.704 ========================= 00:25:52.704 Active slot: 0 00:25:52.704 00:25:52.704 Asymmetric Namespace Access 00:25:52.704 =========================== 00:25:52.704 Change Count : 0 00:25:52.704 Number of ANA Group Descriptors : 1 00:25:52.704 ANA Group Descriptor : 0 00:25:52.704 ANA Group ID : 1 00:25:52.704 Number of NSID Values : 1 00:25:52.704 Change Count : 0 00:25:52.704 ANA State : 1 00:25:52.704 Namespace Identifier : 1 00:25:52.704 00:25:52.704 Commands Supported and Effects 00:25:52.704 ============================== 00:25:52.704 Admin Commands 00:25:52.704 -------------- 00:25:52.704 Get Log Page (02h): Supported 00:25:52.704 Identify (06h): Supported 00:25:52.704 Abort (08h): Supported 00:25:52.704 Set Features (09h): Supported 00:25:52.704 Get Features (0Ah): Supported 00:25:52.704 Asynchronous Event Request (0Ch): Supported 00:25:52.704 Keep Alive (18h): Supported 00:25:52.704 I/O Commands 00:25:52.704 ------------ 00:25:52.704 Flush (00h): Supported 00:25:52.704 Write (01h): Supported LBA-Change 00:25:52.704 Read (02h): Supported 00:25:52.704 Write Zeroes (08h): Supported LBA-Change 00:25:52.704 Dataset Management (09h): Supported 00:25:52.704 00:25:52.704 Error Log 00:25:52.704 ========= 00:25:52.704 Entry: 0 00:25:52.704 Error Count: 0x3 00:25:52.704 Submission Queue Id: 0x0 00:25:52.704 Command Id: 0x5 00:25:52.704 Phase Bit: 0 00:25:52.704 Status Code: 0x2 00:25:52.704 Status Code Type: 0x0 00:25:52.704 Do Not Retry: 1 00:25:52.704 Error Location: 0x28 00:25:52.704 LBA: 0x0 00:25:52.704 Namespace: 0x0 00:25:52.704 Vendor Log 
Page: 0x0 00:25:52.704 ----------- 00:25:52.704 Entry: 1 00:25:52.704 Error Count: 0x2 00:25:52.704 Submission Queue Id: 0x0 00:25:52.704 Command Id: 0x5 00:25:52.704 Phase Bit: 0 00:25:52.704 Status Code: 0x2 00:25:52.704 Status Code Type: 0x0 00:25:52.704 Do Not Retry: 1 00:25:52.704 Error Location: 0x28 00:25:52.704 LBA: 0x0 00:25:52.704 Namespace: 0x0 00:25:52.704 Vendor Log Page: 0x0 00:25:52.704 ----------- 00:25:52.704 Entry: 2 00:25:52.704 Error Count: 0x1 00:25:52.704 Submission Queue Id: 0x0 00:25:52.704 Command Id: 0x4 00:25:52.704 Phase Bit: 0 00:25:52.704 Status Code: 0x2 00:25:52.704 Status Code Type: 0x0 00:25:52.704 Do Not Retry: 1 00:25:52.704 Error Location: 0x28 00:25:52.704 LBA: 0x0 00:25:52.704 Namespace: 0x0 00:25:52.704 Vendor Log Page: 0x0 00:25:52.704 00:25:52.704 Number of Queues 00:25:52.704 ================ 00:25:52.704 Number of I/O Submission Queues: 128 00:25:52.704 Number of I/O Completion Queues: 128 00:25:52.704 00:25:52.704 ZNS Specific Controller Data 00:25:52.704 ============================ 00:25:52.704 Zone Append Size Limit: 0 00:25:52.704 00:25:52.704 00:25:52.704 Active Namespaces 00:25:52.704 ================= 00:25:52.704 get_feature(0x05) failed 00:25:52.704 Namespace ID:1 00:25:52.704 Command Set Identifier: NVM (00h) 00:25:52.704 Deallocate: Supported 00:25:52.704 Deallocated/Unwritten Error: Not Supported 00:25:52.704 Deallocated Read Value: Unknown 00:25:52.704 Deallocate in Write Zeroes: Not Supported 00:25:52.704 Deallocated Guard Field: 0xFFFF 00:25:52.704 Flush: Supported 00:25:52.704 Reservation: Not Supported 00:25:52.704 Namespace Sharing Capabilities: Multiple Controllers 00:25:52.704 Size (in LBAs): 3125627568 (1490GiB) 00:25:52.704 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:52.704 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:52.704 UUID: 2207544c-f7bb-4d22-9aec-99f0c8811a0d 00:25:52.704 Thin Provisioning: Not Supported 00:25:52.704 Per-NS Atomic Units: Yes 00:25:52.704 Atomic Boundary Size (Normal): 0 00:25:52.704 Atomic Boundary Size (PFail): 0 00:25:52.704 Atomic Boundary Offset: 0 00:25:52.704 NGUID/EUI64 Never Reused: No 00:25:52.704 ANA group ID: 1 00:25:52.704 Namespace Write Protected: No 00:25:52.704 Number of LBA Formats: 1 00:25:52.704 Current LBA Format: LBA Format #00 00:25:52.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:52.704 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:52.704 rmmod nvme_tcp 00:25:52.704 rmmod nvme_fabrics 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:52.704 
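The spdk_nvme_identify pass above is roughly equivalent to the following stock nvme-cli sequence against the same kernel target; the /dev/nvme1 node assigned at connect time varies per system and is only illustrative:

    nvme discover   -t tcp -a 10.0.0.1 -s 4420
    nvme connect    -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme id-ctrl /dev/nvme1          # controller data: serial, ANA support, queue counts
    nvme id-ns   /dev/nvme1n1        # namespace data: size in LBAs, LBA formats
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn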
22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:52.704 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:52.705 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.705 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.705 22:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.270 22:13:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:55.270 22:13:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:55.270 22:13:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:55.270 22:13:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:55.270 22:13:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:55.270 22:13:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:55.270 22:13:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:55.270 22:13:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:55.270 22:13:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:55.270 22:13:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:55.270 22:13:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:57.806 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:57.806 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:25:57.806 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:59.711 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:25:59.711 00:25:59.711 real 0m18.242s 00:25:59.711 user 0m4.118s 00:25:59.711 sys 0m9.730s 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.711 ************************************ 00:25:59.711 END TEST nvmf_identify_kernel_target 00:25:59.711 ************************************ 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.711 ************************************ 00:25:59.711 START TEST nvmf_auth_host 00:25:59.711 ************************************ 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:59.711 * Looking for test storage... 00:25:59.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:59.711 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:59.712 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:59.712 22:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:06.283 
22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.283 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:06.284 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:06.284 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:06.284 Found net devices under 0000:af:00.0: cvl_0_0 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:06.284 Found net devices under 0000:af:00.1: cvl_0_1 00:26:06.284 22:13:45 
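
The gather_supported_nvmf_pci_devs loop traced above matches the host's PCI functions against the known e810/x722/mlx device-ID tables and then resolves each hit to its network interface through sysfs. A minimal stand-alone sketch of the same lookup, assuming the standard /sys/bus/pci layout and the 0x8086:0x159b E810 IDs seen in this run:

  # Walk the PCI bus and print the net interface behind each matching NIC.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")    # e.g. 0x8086 (Intel)
      device=$(cat "$pci/device")    # e.g. 0x159b (E810)
      if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
          ls "$pci/net"              # interface name(s) bound to this function
      fi
  done
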
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:06.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:26:06.284 00:26:06.284 --- 10.0.0.2 ping statistics --- 00:26:06.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.284 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:06.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:26:06.284 00:26:06.284 --- 10.0.0.1 ping statistics --- 00:26:06.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.284 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2822372 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2822372 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2822372 ']' 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
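
Everything from nvmf_tcp_init through nvmfappstart above boils down to: keep one E810 port (cvl_0_1) in the default namespace as the initiator, move the other (cvl_0_0) into a private namespace as the target side, open TCP/4420, sanity-check with ping, and launch nvmf_tgt inside that namespace. A condensed sketch using the interface names, addresses and flags shown in the trace (paths relative to the SPDK checkout):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
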
00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.284 22:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2427c65d914f30302107f102bc6296c6 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QT9 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2427c65d914f30302107f102bc6296c6 0 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2427c65d914f30302107f102bc6296c6 0 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2427c65d914f30302107f102bc6296c6 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:07.223 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QT9 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QT9 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.QT9 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:07.482 22:13:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1fed2b97714710760e7f5f9ea8856581c69be0eabf338861da2687cabf2a1ade 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.COB 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1fed2b97714710760e7f5f9ea8856581c69be0eabf338861da2687cabf2a1ade 3 00:26:07.482 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1fed2b97714710760e7f5f9ea8856581c69be0eabf338861da2687cabf2a1ade 3 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1fed2b97714710760e7f5f9ea8856581c69be0eabf338861da2687cabf2a1ade 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.COB 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.COB 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.COB 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=407f929e3e4caeb0ad451231771464cdaee1e187a14702f4 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.s1a 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 407f929e3e4caeb0ad451231771464cdaee1e187a14702f4 0 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 407f929e3e4caeb0ad451231771464cdaee1e187a14702f4 0 
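
The gen_dhchap_key calls above produce the key/controller-key pairs the test will register later: each one is a fixed number of random hex characters read from /dev/urandom, written to a mktemp file, and wrapped by the python helper into the DH-HMAC-CHAP secret representation ("DHHC-1:<digest id>:<encoded secret>:"). A sketch of one such step, with the wrapping left to the helper, since its exact encoding (base64 of the secret plus a CRC-32, per the NVMe-oF secret format) is not spelled out in the trace:

  # One null-digest, 32-hex-char key, mirroring "gen_dhchap_key null 32".
  key=$(xxd -p -c0 -l 16 /dev/urandom)          # 16 random bytes -> 32 hex chars
  file=$(mktemp -t spdk.key-null.XXX)
  # format_dhchap_key then writes "DHHC-1:00:<encoded secret>:" into $file;
  # recent nvme-cli also ships a gen-dhchap-key helper for the same purpose.
  chmod 0600 "$file"                            # trace keeps keys owner-readable only
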
00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=407f929e3e4caeb0ad451231771464cdaee1e187a14702f4 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.s1a 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.s1a 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.s1a 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=489169e1cc29db2c5961168038c0086894a136889a02e71a 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LAi 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 489169e1cc29db2c5961168038c0086894a136889a02e71a 2 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 489169e1cc29db2c5961168038c0086894a136889a02e71a 2 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=489169e1cc29db2c5961168038c0086894a136889a02e71a 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LAi 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LAi 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.LAi 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:07.483 22:13:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10e6f8851729c7422ac10c9e76a3dc5e 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0E9 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10e6f8851729c7422ac10c9e76a3dc5e 1 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10e6f8851729c7422ac10c9e76a3dc5e 1 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10e6f8851729c7422ac10c9e76a3dc5e 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:07.483 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0E9 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0E9 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0E9 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2561cd0c5d2348c15e8d81bc40f851bc 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.B4u 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2561cd0c5d2348c15e8d81bc40f851bc 1 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2561cd0c5d2348c15e8d81bc40f851bc 1 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=2561cd0c5d2348c15e8d81bc40f851bc 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.B4u 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.B4u 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.B4u 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7a79e5c7b63e07357ce4804324182dc04d01d4b1ce27fd9f 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.cPw 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7a79e5c7b63e07357ce4804324182dc04d01d4b1ce27fd9f 2 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7a79e5c7b63e07357ce4804324182dc04d01d4b1ce27fd9f 2 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7a79e5c7b63e07357ce4804324182dc04d01d4b1ce27fd9f 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.cPw 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.cPw 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.cPw 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:07.743 22:13:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1f8a56626c2efa44fb98b31b23b6ee24 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1kh 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f8a56626c2efa44fb98b31b23b6ee24 0 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f8a56626c2efa44fb98b31b23b6ee24 0 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f8a56626c2efa44fb98b31b23b6ee24 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1kh 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1kh 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1kh 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:07.743 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bfcce695a3b9d56e2471a6c49aecc66f885e0ee97e193d4d9f7d3fd211706c5a 00:26:07.744 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:07.744 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GOx 00:26:07.744 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bfcce695a3b9d56e2471a6c49aecc66f885e0ee97e193d4d9f7d3fd211706c5a 3 00:26:07.744 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bfcce695a3b9d56e2471a6c49aecc66f885e0ee97e193d4d9f7d3fd211706c5a 3 00:26:07.744 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:07.744 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:07.744 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bfcce695a3b9d56e2471a6c49aecc66f885e0ee97e193d4d9f7d3fd211706c5a 00:26:07.744 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:07.744 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:26:08.003 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GOx 00:26:08.003 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GOx 00:26:08.003 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.GOx 00:26:08.003 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:08.003 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2822372 00:26:08.003 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2822372 ']' 00:26:08.004 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.004 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.004 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.004 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.004 22:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QT9 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.COB ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.COB 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.s1a 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.LAi ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.LAi 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0E9 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.B4u ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.B4u 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.cPw 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1kh ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1kh 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.GOx 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.004 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.264 22:13:47 
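
All of the generated files are then handed to the running target through keyring_file_add_key, pairing keyN (the host secret) with ckeyN (the controller secret) for each index. rpc_cmd in the trace is a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the equivalent direct calls would look roughly like this (file names as generated in this run):

  ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.QT9
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.COB
  ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.s1a
  ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LAi
  # ... and so on for key2/ckey2, key3/ckey3 and key4
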
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:08.264 22:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:11.556 Waiting for block devices as requested 00:26:11.556 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:11.556 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:11.556 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:11.556 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:11.815 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:11.815 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:11.815 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:12.075 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:12.075 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:12.075 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:12.075 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:12.334 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:12.334 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:12.334 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:12.594 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:12.594 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:12.853 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:13.421 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:13.421 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:13.421 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:13.422 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:13.422 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:13.422 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:13.422 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:13.422 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:13.422 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:13.422 No valid GPT data, bailing 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:13.682 22:13:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:13.682 00:26:13.682 Discovery Log Number of Records 2, Generation counter 2 00:26:13.682 =====Discovery Log Entry 0====== 00:26:13.682 trtype: tcp 00:26:13.682 adrfam: ipv4 00:26:13.682 subtype: current discovery subsystem 00:26:13.682 treq: not specified, sq flow control disable supported 00:26:13.682 portid: 1 00:26:13.682 trsvcid: 4420 00:26:13.682 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:13.682 traddr: 10.0.0.1 00:26:13.682 eflags: none 00:26:13.682 sectype: none 00:26:13.682 =====Discovery Log Entry 1====== 00:26:13.682 trtype: tcp 00:26:13.682 adrfam: ipv4 00:26:13.682 subtype: nvme subsystem 00:26:13.682 treq: not specified, sq flow control disable supported 00:26:13.682 portid: 1 00:26:13.682 trsvcid: 4420 00:26:13.682 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:13.682 traddr: 10.0.0.1 00:26:13.682 eflags: none 00:26:13.682 sectype: none 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
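
configure_kernel_target and nvmet_auth_init/nvmet_auth_set_key drive the in-kernel nvmet target purely through configfs: create the subsystem, back namespace 1 with the local /dev/nvme0n1, expose it on a TCP/4420 port at 10.0.0.1, whitelist the host NQN, and store its DH-HMAC-CHAP parameters. A sketch of the attribute writes behind the mkdir/echo trace above; the paths and values come from the trace, while the exact attribute names (addr_*, device_path, dhchap_*) reflect the kernel nvmet configfs interface as commonly documented rather than something the log prints verbatim:

  cd /sys/kernel/config/nvmet
  echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
  echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
  echo 10.0.0.1     > ports/1/addr_traddr
  echo tcp          > ports/1/addr_trtype
  echo 4420         > ports/1/addr_trsvcid
  echo ipv4         > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 \
        /sys/kernel/config/nvmet/ports/1/subsystems/
  mkdir hosts/nqn.2024-02.io.spdk:host0
  ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
        /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/
  echo 'hmac(sha256)' > hosts/nqn.2024-02.io.spdk:host0/dhchap_hash
  echo ffdhe2048      > hosts/nqn.2024-02.io.spdk:host0/dhchap_dhgroup
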
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.682 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.942 nvme0n1 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
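The trace above is one full pass of auth.sh's connect_authenticate helper for sha256/ffdhe2048: bdev_nvme_set_options pins the initiator to the digest/dhgroup pair under test, bdev_nvme_attach_controller connects with the host key (plus the controller key when one is defined), bdev_nvme_get_controllers confirms nvme0 came up, and bdev_nvme_detach_controller tears it down before the next keyid. The block below is a minimal sketch of that pass, not the test's actual code; it assumes rpc_cmd forwards to scripts/rpc.py against the running target, that the key${keyid}/ckey${keyid} secrets were registered earlier in auth.sh (not shown in this excerpt), and that the target listens on 10.0.0.1:4420 as in the trace. The target-side nvmet_auth_set_key calls that precede each pass presumably write the matching 'hmac(sha256)', dhgroup, and DHHC-1 secrets into the kernel nvmet host entry.

    # Hypothetical helper mirroring the traced connect_authenticate flow.
    connect_authenticate_sketch() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Restrict the initiator to the digest/dhgroup pair being exercised.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"

        # Connect with DH-HMAC-CHAP; auth.sh drops --dhchap-ctrlr-key when no
        # controller key exists for this keyid (keyid 4 in the trace has none).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

        # Authentication succeeded only if the controller actually appeared.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

        # Detach so the next digest/dhgroup/keyid combination starts clean.
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The remainder of the log repeats this pass for keyids 0 through 4 and then advances the dhgroup loop (ffdhe3072, ffdhe4096, and so on) over the same five keys, so successive blocks differ only in the key material and the dhgroup passed to the RPCs.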
00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.942 22:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.942 nvme0n1 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.942 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.202 22:13:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.202 nvme0n1 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.202 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.487 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.487 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.487 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:14.487 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.488 nvme0n1 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.488 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.748 nvme0n1 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.748 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.749 22:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.008 nvme0n1 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.008 22:13:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.008 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.268 nvme0n1 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.268 
22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.268 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.528 nvme0n1 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.528 22:13:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.528 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.788 nvme0n1 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.788 22:13:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.788 22:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.047 nvme0n1 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.047 22:13:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.047 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.048 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.307 nvme0n1 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.307 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 nvme0n1 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:16.567 22:13:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.567 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.827 nvme0n1 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
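For readability, each connect_authenticate cycle being traced above reduces to the RPC sequence below against the target at 10.0.0.1:4420. This is a minimal sketch, not the literal test script: the rpc.py path is assumed to be an SPDK checkout's scripts/rpc.py on the default socket, and key2/ckey2 stand for whatever key names the test registered earlier for the keyid under test.

  # Restrict the host to one digest/dhgroup combination, then attach with the key under test.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Verify the controller actually authenticated and came up, then tear it down for the next keyid.
  [[ "$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0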
00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.827 22:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.087 nvme0n1 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.087 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.347 nvme0n1 00:26:17.347 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.606 22:13:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.606 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.865 nvme0n1 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.865 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.866 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.866 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.866 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.866 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:26:17.866 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.866 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.866 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.866 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.866 22:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.124 nvme0n1 00:26:18.124 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.124 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.124 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.124 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.124 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.124 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 
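The repeated nvmf/common.sh@741-755 lines in the trace are get_main_ns_ip resolving which address the host should dial for the configured transport. A rough reconstruction of that helper is sketched below; the variable names follow the trace, while TEST_TRANSPORT and the NVMF_* environment variables are assumed to be set by the surrounding test harness.

  get_main_ns_ip() {
          local ip
          local -A ip_candidates
          # Per-transport name of the variable that holds the address to use.
          ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
          ip_candidates["tcp"]=NVMF_INITIATOR_IP
          [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
          # Indirect expansion: for tcp this dereferences NVMF_INITIATOR_IP,
          # which is why every attach in this log targets 10.0.0.1.
          ip=${ip_candidates[$TEST_TRANSPORT]}
          ip=${!ip}
          [[ -z $ip ]] && return 1
          echo "$ip"
  }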
00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.383 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.641 nvme0n1 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.641 22:13:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.641 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.642 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.642 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.642 22:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.209 nvme0n1 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.209 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.210 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.469 nvme0n1 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.469 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.470 22:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.037 nvme0n1 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.037 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:20.604 nvme0n1 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:20.604 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.605 22:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.174 nvme0n1 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:21.174 
22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.174 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.434 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.003 nvme0n1 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.003 22:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.003 
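
[editor's note] The get_main_ns_ip helper traced before every attach does nothing more than the entries suggest: map the transport to an environment-variable name, dereference it, and print the address (10.0.0.1 here). A condensed sketch under those assumptions; the transport variable name and the exported NVMF_INITIATOR_IP are taken as given by the surrounding test setup, and the real helper's empty-value fallbacks are omitted.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
      # The transport is tcp in this run, so NVMF_INITIATOR_IP (10.0.0.1) is selected.
      ip=${ip_candidates[$TEST_TRANSPORT]}
      echo "${!ip}"   # indirect expansion: print the value of the variable named in $ip
  }
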
22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.003 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.571 nvme0n1 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.571 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.572 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.572 22:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.138 nvme0n1 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.138 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.398 nvme0n1 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.398 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.658 nvme0n1 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:23.658 22:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.658 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.659 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.919 nvme0n1 00:26:23.919 22:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.919 22:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.919 nvme0n1 00:26:23.920 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.920 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.920 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.920 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.920 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.920 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 nvme0n1 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.179 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:24.180 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.180 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.180 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.180 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.180 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:24.180 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 nvme0n1 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.438 
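
[editor's note] Pulling back from the individual entries, the trace from here on is the auth.sh matrix advancing to sha384 with progressively different FFDHE groups. A minimal reconstruction of that outer loop as visible at host/auth.sh@100-104 in the trace, assuming the digests, dhgroups and keys arrays are populated as the log shows (key IDs 0 through 4, with key 4 having no controller key):

  for digest in "${digests[@]}"; do            # sha256, sha384, ... in this run
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192
          for keyid in "${!keys[@]}"; do       # 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key setup
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach/detach (sketched earlier)
          done
      done
  done
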
22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.438 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.439 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.697 22:14:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.697 nvme0n1 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.697 22:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.956 nvme0n1 00:26:24.956 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.956 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.956 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.956 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.956 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.957 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.216 nvme0n1 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:25.216 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.217 
22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.217 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.477 nvme0n1 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.477 
22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.477 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.737 nvme0n1 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.737 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.997 22:14:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.997 22:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.257 nvme0n1 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.257 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.258 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.517 nvme0n1 00:26:26.517 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.517 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.517 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.517 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.517 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.518 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.780 nvme0n1 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.780 22:14:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:26.780 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.781 22:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.043 nvme0n1 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.043 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.044 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.612 nvme0n1 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.612 22:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.871 nvme0n1 00:26:27.871 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.871 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.871 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.871 22:14:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.871 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.871 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.131 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.132 22:14:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.132 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.392 nvme0n1 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.392 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.392 
22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.961 nvme0n1 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.961 22:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.961 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.962 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.962 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.962 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.962 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.221 nvme0n1 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:29.221 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.221 22:14:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.481 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.051 nvme0n1 00:26:30.051 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.051 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.051 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.051 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.051 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.051 22:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:30.051 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.052 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.621 nvme0n1 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.621 
22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.621 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.622 22:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.191 nvme0n1 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.191 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.761 nvme0n1 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.761 22:14:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.761 22:14:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.761 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.762 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.762 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.762 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.762 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.762 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.762 22:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.333 nvme0n1 00:26:32.333 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.333 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.333 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.333 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.333 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.333 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.593 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:32.593 nvme0n1 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.594 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:32.853 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.854 nvme0n1 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.854 22:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:32.854 
22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.854 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.113 nvme0n1 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.114 
22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.114 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.374 nvme0n1 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.374 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.703 nvme0n1 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.703 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.704 nvme0n1 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.704 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.963 
22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.963 22:14:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.963 22:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.963 nvme0n1 00:26:33.963 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.963 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.963 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.963 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.963 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:34.223 22:14:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.223 nvme0n1 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.223 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.481 22:14:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.481 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.482 nvme0n1 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.482 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.740 
22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
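
The records above repeat the same cycle for every digest/dhgroup/keyid combination: restrict the initiator to the DH-HMAC-CHAP digest and FFDHE group under test, attach a controller over TCP with the matching key, confirm that nvme0 shows up in bdev_nvme_get_controllers, then detach it before the next iteration. Below is a minimal bash sketch of that cycle, assuming rpc_cmd is the autotest wrapper around SPDK's rpc.py and that keys named key<N>/ckey<N> were registered with the initiator earlier in the test (outside this excerpt); it mirrors the flags visible in the xtrace rather than documenting the real connect_authenticate helper.

# Minimal sketch of one connect/verify/detach pass, reconstructed from the
# xtrace above. rpc_cmd, the 10.0.0.1:4420 listener, the NQNs and the key
# names are taken from this log; everything else is an assumption.
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3 ctrlr_key=${4:-}

    # Limit the initiator to the digest/dhgroup pair being exercised.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with DH-HMAC-CHAP; keyid 4 in this run has no controller key, so the
    # --dhchap-ctrlr-key flag is only appended when one is supplied.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ctrlr_key:+--dhchap-ctrlr-key "$ctrlr_key"}

    # A failed authentication fails the attach, so finding nvme0 is the pass signal.
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

    rpc_cmd bdev_nvme_detach_controller nvme0
}

# e.g. the keyid=1 pass seen above: connect_authenticate_sketch sha512 ffdhe4096 1 ckey1
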
00:26:34.740 nvme0n1 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.740 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.998 22:14:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.998 22:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.998 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.257 nvme0n1 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.257 22:14:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.257 22:14:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.257 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.516 nvme0n1 00:26:35.516 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.517 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.776 nvme0n1 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.776 22:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.035 nvme0n1 00:26:36.035 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.035 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.035 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.035 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.035 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.035 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.294 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.553 nvme0n1 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:36.553 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.554 22:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.554 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.812 nvme0n1 00:26:36.812 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.812 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.812 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.812 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.812 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.812 22:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.812 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.812 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.812 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.812 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.071 22:14:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.071 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.329 nvme0n1 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.329 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.896 nvme0n1 00:26:37.896 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.897 22:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.157 nvme0n1 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.157 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.416 22:14:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.416 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.417 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.676 nvme0n1 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQyN2M2NWQ5MTRmMzAzMDIxMDdmMTAyYmM2Mjk2YzZIzIp7: 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: ]] 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWZlZDJiOTc3MTQ3MTA3NjBlN2Y1ZjllYTg4NTY1ODFjNjliZTBlYWJmMzM4ODYxZGEyNjg3Y2FiZjJhMWFkZQHnoVw=: 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.676 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.677 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.677 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.677 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.677 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.677 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.677 22:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.245 nvme0n1 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:39.245 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.504 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.505 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.505 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.505 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.505 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.505 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:39.505 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.505 22:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.154 nvme0n1 00:26:40.154 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.155 22:14:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTBlNmY4ODUxNzI5Yzc0MjJhYzEwYzllNzZhM2RjNWUgEmnl: 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: ]] 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU2MWNkMGM1ZDIzNDhjMTVlOGQ4MWJjNDBmODUxYmOJuPFR: 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.155 22:14:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.155 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.725 nvme0n1 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2E3OWU1YzdiNjNlMDczNTdjZTQ4MDQzMjQxODJkYzA0ZDAxZDRiMWNlMjdmZDlmGMOMLQ==: 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: ]] 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4YTU2NjI2YzJlZmE0NGZiOThiMzFiMjNiNmVlMjS50+T6: 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:40.725 22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.725 
22:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.295 nvme0n1 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmZjY2U2OTVhM2I5ZDU2ZTI0NzFhNmM0OWFlY2M2NmY4ODVlMGVlOTdlMTkzZDRkOWY3ZDNmZDIxMTcwNmM1YQsmRTo=: 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.295 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.865 nvme0n1 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.865 22:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDA3ZjkyOWUzZTRjYWViMGFkNDUxMjMxNzcxNDY0Y2RhZWUxZTE4N2ExNDcwMmY04e9xWw==: 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: ]] 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5MTY5ZTFjYzI5ZGIyYzU5NjExNjgwMzhjMDA4Njg5NGExMzY4ODlhMDJlNzFhNcsZlA==: 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.865 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.125 request: 00:26:42.125 { 00:26:42.125 "name": "nvme0", 00:26:42.125 "trtype": "tcp", 00:26:42.125 "traddr": "10.0.0.1", 00:26:42.125 "adrfam": "ipv4", 00:26:42.125 "trsvcid": "4420", 00:26:42.125 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:42.125 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:42.125 "prchk_reftag": false, 00:26:42.125 "prchk_guard": false, 00:26:42.125 "hdgst": false, 00:26:42.125 "ddgst": false, 00:26:42.126 "method": "bdev_nvme_attach_controller", 00:26:42.126 "req_id": 1 00:26:42.126 } 00:26:42.126 Got JSON-RPC error response 00:26:42.126 response: 00:26:42.126 { 00:26:42.126 "code": -5, 00:26:42.126 "message": "Input/output error" 00:26:42.126 } 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.126 22:14:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.126 request: 00:26:42.126 { 00:26:42.126 "name": "nvme0", 00:26:42.126 "trtype": "tcp", 00:26:42.126 "traddr": "10.0.0.1", 00:26:42.126 "adrfam": "ipv4", 00:26:42.126 "trsvcid": "4420", 00:26:42.126 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:42.126 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:42.126 "prchk_reftag": false, 00:26:42.126 "prchk_guard": false, 00:26:42.126 "hdgst": false, 00:26:42.126 "ddgst": false, 00:26:42.126 "dhchap_key": "key2", 00:26:42.126 "method": "bdev_nvme_attach_controller", 00:26:42.126 "req_id": 1 00:26:42.126 } 00:26:42.126 Got JSON-RPC error response 00:26:42.126 response: 00:26:42.126 { 00:26:42.126 "code": -5, 00:26:42.126 "message": "Input/output error" 00:26:42.126 } 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:42.126 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.127 request: 00:26:42.127 { 00:26:42.127 "name": "nvme0", 00:26:42.127 "trtype": "tcp", 00:26:42.127 "traddr": "10.0.0.1", 00:26:42.127 "adrfam": "ipv4", 00:26:42.127 "trsvcid": "4420", 00:26:42.127 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:42.127 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:42.127 "prchk_reftag": false, 00:26:42.127 "prchk_guard": false, 00:26:42.127 "hdgst": false, 00:26:42.127 "ddgst": false, 00:26:42.127 "dhchap_key": "key1", 00:26:42.127 "dhchap_ctrlr_key": "ckey2", 00:26:42.127 "method": "bdev_nvme_attach_controller", 00:26:42.127 "req_id": 1 00:26:42.127 } 00:26:42.127 Got JSON-RPC error response 00:26:42.127 response: 00:26:42.127 { 00:26:42.127 "code": -5, 00:26:42.127 "message": "Input/output error" 00:26:42.127 } 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:42.127 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:42.127 rmmod nvme_tcp 00:26:42.387 rmmod nvme_fabrics 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2822372 ']' 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2822372 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2822372 ']' 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2822372 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2822372 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2822372' 00:26:42.387 killing process with pid 2822372 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2822372 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2822372 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.387 22:14:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.387 22:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:44.928 22:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:47.466 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:47.466 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:49.375 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:26:49.375 22:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.QT9 /tmp/spdk.key-null.s1a /tmp/spdk.key-sha256.0E9 /tmp/spdk.key-sha384.cPw /tmp/spdk.key-sha512.GOx /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:49.375 22:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:51.914 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:51.914 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:52.173 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:52.173 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:52.173 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:52.173 00:26:52.173 real 0m52.692s 00:26:52.173 user 0m45.518s 00:26:52.173 sys 0m14.574s 00:26:52.173 22:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:52.173 22:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.173 ************************************ 00:26:52.173 END TEST nvmf_auth_host 00:26:52.173 ************************************ 00:26:52.173 22:14:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:52.173 22:14:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:52.174 22:14:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:52.174 22:14:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:52.174 22:14:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.174 ************************************ 00:26:52.174 START TEST nvmf_digest 00:26:52.174 ************************************ 00:26:52.174 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:52.434 * Looking for test storage... 
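For reference, the DH-HMAC-CHAP failure paths traced in the auth test above reduce to roughly the following shell sketch. It uses the same RPCs, address and NQNs that appear in the trace; "rpc.py" stands in for the full scripts/rpc.py path, and a kernel nvmet subsystem is assumed to already be listening on 10.0.0.1:4420 as configured earlier in the run.

    # Restrict the host to sha256 / ffdhe2048, then attempt attaches the target must reject
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Attaching with no key, the wrong key slot, or a mismatched ctrlr key must fail with -5 (Input/output error)
    if rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi
    # A failed attach must not leave a controller behind
    [ "$(rpc.py bdev_nvme_get_controllers | jq length)" -eq 0 ]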
00:26:52.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:52.434 
22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:52.434 22:14:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:59.006 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.006 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:59.007 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.007 
22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:59.007 Found net devices under 0000:af:00.0: cvl_0_0 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:59.007 Found net devices under 0000:af:00.1: cvl_0_1 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.007 22:14:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:59.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:26:59.007 00:26:59.007 --- 10.0.0.2 ping statistics --- 00:26:59.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.007 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:59.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:26:59.007 00:26:59.007 --- 10.0.0.1 ping statistics --- 00:26:59.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.007 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:59.007 ************************************ 00:26:59.007 START TEST nvmf_digest_clean 00:26:59.007 ************************************ 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2835949 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2835949 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2835949 ']' 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:59.007 22:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:59.007 [2024-07-24 22:14:37.782117] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:26:59.008 [2024-07-24 22:14:37.782163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.008 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.008 [2024-07-24 22:14:37.857154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.008 [2024-07-24 22:14:37.928828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.008 [2024-07-24 22:14:37.928866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.008 [2024-07-24 22:14:37.928876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.008 [2024-07-24 22:14:37.928884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.008 [2024-07-24 22:14:37.928891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
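The target side of the digest test runs inside the cvl_0_0_ns_spdk namespace created during nvmftestinit above. A minimal sketch of that bring-up, with the workspace path shortened and the harness's waitforlisten replaced by a plain RPC-socket poll, looks like this:

    # Start nvmf_tgt paused in the target namespace and wait for its RPC socket before configuring it
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done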
00:26:59.008 [2024-07-24 22:14:37.928910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:59.577 null0 00:26:59.577 [2024-07-24 22:14:38.680325] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.577 [2024-07-24 22:14:38.704502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2836223 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2836223 /var/tmp/bperf.sock 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2836223 ']' 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:59.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.577 22:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:59.577 [2024-07-24 22:14:38.741905] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:26:59.577 [2024-07-24 22:14:38.741950] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836223 ] 00:26:59.577 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.837 [2024-07-24 22:14:38.807781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.837 [2024-07-24 22:14:38.875215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.405 22:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.405 22:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:00.405 22:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:00.405 22:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:00.405 22:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:00.718 22:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.718 22:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.977 nvme0n1 00:27:00.977 22:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:00.977 22:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:00.977 Running I/O for 2 seconds... 
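In outline, the bdevperf flow being traced here is the following (paths shortened; same socket, flags and NQN as in the trace): bdevperf is started paused on its own RPC socket, the test finishes its framework init, attaches the target with data digest enabled, and then triggers the timed run via bdevperf.py.

    # bdevperf starts idle (-z --wait-for-rpc) on /var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # finish init, attach the TCP target with --ddgst, then kick off the 2-second workload
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
           -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests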
00:27:03.537 00:27:03.537 Latency(us) 00:27:03.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.537 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:03.537 nvme0n1 : 2.04 27737.27 108.35 0.00 0.00 4538.45 2031.62 44249.91 00:27:03.537 =================================================================================================================== 00:27:03.537 Total : 27737.27 108.35 0.00 0.00 4538.45 2031.62 44249.91 00:27:03.537 0 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:03.537 | select(.opcode=="crc32c") 00:27:03.537 | "\(.module_name) \(.executed)"' 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2836223 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2836223 ']' 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2836223 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2836223 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2836223' 00:27:03.537 killing process with pid 2836223 00:27:03.537 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2836223 00:27:03.537 Received shutdown signal, test time was about 2.000000 seconds 00:27:03.537 00:27:03.537 Latency(us) 00:27:03.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.537 =================================================================================================================== 00:27:03.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2836223 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2836784 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2836784 /var/tmp/bperf.sock 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2836784 ']' 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:03.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:03.538 22:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.538 [2024-07-24 22:14:42.669321] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:27:03.538 [2024-07-24 22:14:42.669384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836784 ] 00:27:03.538 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:03.538 Zero copy mechanism will not be used. 
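After each run the test reads the accel framework's crc32c counters from the bdevperf instance to confirm the digests were actually computed, and by the expected module (software here, since no DSA initiator or target was requested). A sketch of that check, using the same jq filter as the trace:

    # Pull crc32c stats from bdevperf's accel layer and verify module name and execution count
    read -r acc_module acc_executed < <(rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]]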
00:27:03.538 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.538 [2024-07-24 22:14:42.739256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.798 [2024-07-24 22:14:42.814627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.378 22:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:04.378 22:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:04.378 22:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:04.378 22:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:04.378 22:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:04.643 22:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.643 22:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.902 nvme0n1 00:27:05.162 22:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:05.162 22:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:05.162 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:05.162 Zero copy mechanism will not be used. 00:27:05.162 Running I/O for 2 seconds... 
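Each clean-digest iteration drives the already-running bdevperf process through the same three RPC steps shown in the trace: release the framework (bdevperf was started with --wait-for-rpc), attach an NVMe-oF controller with TCP data digest enabled, and kick off the timed workload. A condensed sketch of that sequence, with paths and addresses copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    sock=/var/tmp/bperf.sock

    # bdevperf was launched with --wait-for-rpc, so finish framework init first.
    "$rpc" -s "$sock" framework_start_init

    # Attach the target subsystem as bdev nvme0n1 with data digest (--ddgst) enabled.
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the workload configured on the bdevperf command line (-w/-o/-q/-t).
    "$bperf_py" -s "$sock" perform_tests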
00:27:07.069 00:27:07.069 Latency(us) 00:27:07.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.069 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:07.069 nvme0n1 : 2.00 3727.40 465.92 0.00 0.00 4290.43 937.16 16148.07 00:27:07.069 =================================================================================================================== 00:27:07.069 Total : 3727.40 465.92 0.00 0.00 4290.43 937.16 16148.07 00:27:07.069 0 00:27:07.069 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:07.069 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:07.069 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:07.069 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:07.069 | select(.opcode=="crc32c") 00:27:07.069 | "\(.module_name) \(.executed)"' 00:27:07.069 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2836784 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2836784 ']' 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2836784 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2836784 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2836784' 00:27:07.328 killing process with pid 2836784 00:27:07.328 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2836784 00:27:07.328 Received shutdown signal, test time was about 2.000000 seconds 00:27:07.328 00:27:07.328 Latency(us) 00:27:07.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.328 =================================================================================================================== 00:27:07.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:07.329 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2836784 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2837566 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2837566 /var/tmp/bperf.sock 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2837566 ']' 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:07.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.588 22:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:07.588 [2024-07-24 22:14:46.700887] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:27:07.588 [2024-07-24 22:14:46.700949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837566 ] 00:27:07.588 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.588 [2024-07-24 22:14:46.770769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.847 [2024-07-24 22:14:46.844675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.415 22:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.415 22:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:08.415 22:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:08.415 22:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:08.415 22:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:08.675 22:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.675 22:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.933 nvme0n1 00:27:09.192 22:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:09.192 22:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:09.192 Running I/O for 2 seconds... 
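The trace launches bdevperf in the background and then blocks in waitforlisten until the UNIX-domain RPC socket answers; waitforlisten itself lives in autotest_common.sh and is not shown here. A rough stand-in for that wait, assuming polling the socket with the generic rpc_get_methods call is an acceptable readiness probe (command-line flags copied from the randwrite 4096/128 run above):

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    "$bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!

    # Poll the RPC socket until the app is listening (stand-in for waitforlisten).
    for _ in $(seq 1 100); do
        "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    echo "bdevperf (pid $bperfpid) is listening on $sock"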
00:27:11.097 00:27:11.097 Latency(us) 00:27:11.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.097 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:11.097 nvme0n1 : 2.00 28360.77 110.78 0.00 0.00 4505.54 2018.51 7392.46 00:27:11.097 =================================================================================================================== 00:27:11.097 Total : 28360.77 110.78 0.00 0.00 4505.54 2018.51 7392.46 00:27:11.097 0 00:27:11.097 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:11.097 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:11.097 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:11.097 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:11.097 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:11.097 | select(.opcode=="crc32c") 00:27:11.097 | "\(.module_name) \(.executed)"' 00:27:11.356 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:11.356 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:11.356 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:11.356 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:11.356 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2837566 00:27:11.356 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2837566 ']' 00:27:11.356 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2837566 00:27:11.356 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:11.357 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:11.357 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2837566 00:27:11.357 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:11.357 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:11.357 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2837566' 00:27:11.357 killing process with pid 2837566 00:27:11.357 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2837566 00:27:11.357 Received shutdown signal, test time was about 2.000000 seconds 00:27:11.357 00:27:11.357 Latency(us) 00:27:11.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.357 =================================================================================================================== 00:27:11.357 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:11.357 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2837566 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2838142 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2838142 /var/tmp/bperf.sock 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2838142 ']' 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:11.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.616 22:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:11.616 [2024-07-24 22:14:50.717991] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:27:11.616 [2024-07-24 22:14:50.718046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838142 ] 00:27:11.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:11.616 Zero copy mechanism will not be used. 
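As a quick sanity check on the randwrite table above, the MiB/s column is simply IOPS times the IO size: 28360.77 IOPS × 4096 B ≈ 110.78 MiB/s, matching the reported value; the same relation holds for the earlier 131072-byte randread run (3727.40 IOPS × 128 KiB ≈ 465.92 MiB/s).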
00:27:11.616 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.616 [2024-07-24 22:14:50.790102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.875 [2024-07-24 22:14:50.858765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.443 22:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.443 22:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:12.443 22:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:12.443 22:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:12.443 22:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:12.702 22:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.702 22:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.961 nvme0n1 00:27:12.961 22:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:12.961 22:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:12.961 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:12.961 Zero copy mechanism will not be used. 00:27:12.961 Running I/O for 2 seconds... 
00:27:14.875 00:27:14.875 Latency(us) 00:27:14.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.875 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:14.875 nvme0n1 : 2.00 4997.02 624.63 0.00 0.00 3197.55 2319.97 20342.37 00:27:14.875 =================================================================================================================== 00:27:14.875 Total : 4997.02 624.63 0.00 0.00 3197.55 2319.97 20342.37 00:27:14.875 0 00:27:14.875 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:14.875 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:14.875 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:14.875 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:14.875 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:14.875 | select(.opcode=="crc32c") 00:27:14.875 | "\(.module_name) \(.executed)"' 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2838142 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2838142 ']' 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2838142 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2838142 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2838142' 00:27:15.134 killing process with pid 2838142 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2838142 00:27:15.134 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.134 00:27:15.134 Latency(us) 00:27:15.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.134 =================================================================================================================== 00:27:15.134 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.134 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2838142 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2835949 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2835949 ']' 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2835949 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2835949 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2835949' 00:27:15.394 killing process with pid 2835949 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2835949 00:27:15.394 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2835949 00:27:15.653 00:27:15.653 real 0m17.001s 00:27:15.653 user 0m31.902s 00:27:15.653 sys 0m5.110s 00:27:15.653 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:15.653 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:15.653 ************************************ 00:27:15.653 END TEST nvmf_digest_clean 00:27:15.653 ************************************ 00:27:15.653 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:15.653 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:15.653 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:15.653 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:15.653 ************************************ 00:27:15.653 START TEST nvmf_digest_error 00:27:15.653 ************************************ 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2838933 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2838933 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2838933 ']' 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.654 22:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.913 [2024-07-24 22:14:54.869820] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:27:15.913 [2024-07-24 22:14:54.869880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.913 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.913 [2024-07-24 22:14:54.942631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.913 [2024-07-24 22:14:55.013983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.913 [2024-07-24 22:14:55.014020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.913 [2024-07-24 22:14:55.014029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.913 [2024-07-24 22:14:55.014037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.913 [2024-07-24 22:14:55.014044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:15.913 [2024-07-24 22:14:55.014064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.547 [2024-07-24 22:14:55.712111] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.547 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.807 null0 00:27:16.807 [2024-07-24 22:14:55.800516] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.807 [2024-07-24 22:14:55.824705] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2839187 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2839187 /var/tmp/bperf.sock 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2839187 ']' 
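The error-path test works by routing the target's crc32c digest calculations through the accel "error" module and then telling that module to corrupt a batch of results, so the initiator (bdevperf) observes data digest failures on the wire. A condensed sketch of the target-side knobs, using the RPC names visible in the trace and the target's /var/tmp/spdk.sock RPC socket (the test itself issues these through its rpc_cmd wrapper):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    tgt_sock=/var/tmp/spdk.sock

    # Route all crc32c work in the target through the error-injection accel module.
    "$rpc" -s "$tgt_sock" accel_assign_opc -o crc32c -m error

    # Leave injection disabled while the bperf side attaches the controller.
    "$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t disable

    # ...attach nvme0 with --ddgst from the bdevperf side as shown earlier, then:

    # Corrupt the next 256 crc32c results so the initiator sees data digest errors.
    "$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t corrupt -i 256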
00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:16.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:16.807 22:14:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.807 [2024-07-24 22:14:55.876814] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:27:16.807 [2024-07-24 22:14:55.876871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839187 ] 00:27:16.807 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.807 [2024-07-24 22:14:55.947755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.066 [2024-07-24 22:14:56.021472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.634 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:17.634 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:17.634 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:17.634 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:17.893 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:17.893 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.893 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:17.893 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.893 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.893 22:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.893 nvme0n1 00:27:18.153 22:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:18.153 22:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.153 22:14:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.153 22:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.153 22:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:18.153 22:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:18.153 Running I/O for 2 seconds... 00:27:18.153 [2024-07-24 22:14:57.220337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.153 [2024-07-24 22:14:57.220375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.153 [2024-07-24 22:14:57.220387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.153 [2024-07-24 22:14:57.229148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.229177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.229192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.239158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.239182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.239193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.248332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.248356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.248367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.256749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.256771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.256781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.265366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.265389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.265400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.274511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.274534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.274544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.283348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.283370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.283380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.293299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.293322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.293333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.300581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.300603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.300617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.309746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.309768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.309779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.319712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.319740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.319750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.327988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.328010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 
[2024-07-24 22:14:57.328021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.336864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.336886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.336896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.346231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.346253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.346264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.356008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.356029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.356040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.154 [2024-07-24 22:14:57.364479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.154 [2024-07-24 22:14:57.364501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.154 [2024-07-24 22:14:57.364512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.414 [2024-07-24 22:14:57.373164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.373186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.373197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.382230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.382252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.382263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.391772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.391793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11696 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.391804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.400288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.400311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.400321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.410203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.410225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.410236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.418298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.418319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.418330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.426977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.426999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.427009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.436084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.436106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.436117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.444563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.444585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.444596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.453887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.453909] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.453923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.461741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.461763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.461773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.471559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.471582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.471593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.481564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.481585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.481596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.489951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.489972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.489982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.499308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.499329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.499340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.507967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.507988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.415 [2024-07-24 22:14:57.507998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.415 [2024-07-24 22:14:57.516508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:18.415 [2024-07-24 22:14:57.516529] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.415 [2024-07-24 22:14:57.516540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:18.415 [2024-07-24 22:14:57.525151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0)
00:27:18.415 [2024-07-24 22:14:57.525172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.415 [2024-07-24 22:14:57.525182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-message pattern (a data digest error on tqpair=(0x11ef1c0) reported by nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done, the affected single-block READ printed by nvme_io_qpair_print_command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1) repeats for over a hundred further commands with varying cid and lba values; the repeated entries are elided here ...]
00:27:19.723 [2024-07-24 22:14:58.805861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0)
00:27:19.723 [2024-07-24 22:14:58.805882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.723 [2024-07-24 22:14:58.805892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.814530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.814552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.814562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.823705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.823730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.823740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.832591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.832612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.832622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.842228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.842250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.842264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.850202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.850224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.850235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.859443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.859464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.859475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.869383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.869404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:19.723 [2024-07-24 22:14:58.869414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.877857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.877878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.877888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.886255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.886276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.886286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.895752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.895774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.895784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.903913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.903935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.903945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.913804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.913825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.913835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.923346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.923370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.923381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.723 [2024-07-24 22:14:58.931561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.723 [2024-07-24 22:14:58.931582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:14772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.723 [2024-07-24 22:14:58.931593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.983 [2024-07-24 22:14:58.940497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.983 [2024-07-24 22:14:58.940518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.983 [2024-07-24 22:14:58.940529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.983 [2024-07-24 22:14:58.949236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.983 [2024-07-24 22:14:58.949257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.983 [2024-07-24 22:14:58.949268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.983 [2024-07-24 22:14:58.958457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.983 [2024-07-24 22:14:58.958478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.983 [2024-07-24 22:14:58.958488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.983 [2024-07-24 22:14:58.967093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.983 [2024-07-24 22:14:58.967115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.983 [2024-07-24 22:14:58.967125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.983 [2024-07-24 22:14:58.976486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.983 [2024-07-24 22:14:58.976507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.983 [2024-07-24 22:14:58.976517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.983 [2024-07-24 22:14:58.985229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.983 [2024-07-24 22:14:58.985250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.983 [2024-07-24 22:14:58.985261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.983 [2024-07-24 22:14:58.993457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.983 [2024-07-24 22:14:58.993478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.983 [2024-07-24 22:14:58.993488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.983 [2024-07-24 22:14:59.003524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.003546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.003556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.011609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.011630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.011641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.020629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.020650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.020661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.029922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.029944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.029955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.038271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.038293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.038304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.047006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.047027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.047038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.056419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 
00:27:19.984 [2024-07-24 22:14:59.056441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.056451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.064626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.064647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.064658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.074647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.074668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.074682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.084533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.084554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.084564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.093143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.093164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.093175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.102161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.102182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.102193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.110693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.110720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.110731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.120431] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.120451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.120461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.129909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.129929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.129940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.137893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.137914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.137925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.147405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.147425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.147436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.156386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.156407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.156417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.165659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.165680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.165690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.174251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.174272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.174282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.183788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.183808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.183818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.984 [2024-07-24 22:14:59.191746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:19.984 [2024-07-24 22:14:59.191767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.984 [2024-07-24 22:14:59.191777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.244 [2024-07-24 22:14:59.201893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:20.244 [2024-07-24 22:14:59.201914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.244 [2024-07-24 22:14:59.201925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.244 [2024-07-24 22:14:59.210819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11ef1c0) 00:27:20.244 [2024-07-24 22:14:59.210839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.244 [2024-07-24 22:14:59.210850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.244 00:27:20.244 Latency(us) 00:27:20.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.244 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:20.244 nvme0n1 : 2.00 28438.42 111.09 0.00 0.00 4496.03 2254.44 12268.34 00:27:20.244 =================================================================================================================== 00:27:20.244 Total : 28438.42 111.09 0.00 0.00 4496.03 2254.44 12268.34 00:27:20.244 0 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:20.244 | .driver_specific 00:27:20.244 | .nvme_error 00:27:20.244 | .status_code 00:27:20.244 | .command_transient_transport_error' 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 223 > 0 )) 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2839187 00:27:20.244 22:14:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2839187 ']' 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2839187 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:20.244 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2839187 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2839187' 00:27:20.504 killing process with pid 2839187 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2839187 00:27:20.504 Received shutdown signal, test time was about 2.000000 seconds 00:27:20.504 00:27:20.504 Latency(us) 00:27:20.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.504 =================================================================================================================== 00:27:20.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2839187 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2839754 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2839754 /var/tmp/bperf.sock 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2839754 ']' 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:20.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
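The xtrace lines above show how the first bperf run is graded: host/digest.sh queries bdev_get_iostat over the bdevperf RPC socket and filters the per-bdev NVMe error counters (enabled with bdev_nvme_set_options --nvme-error-stat, also traced for the next run below) down to the transient transport error count, which must be non-zero. A minimal standalone sketch of that check, assuming only the rpc.py invocation and jq filter traced above (the variable names are illustrative, not the verbatim host/digest.sh source):

#!/usr/bin/env bash
# Sketch: count READ commands that completed with COMMAND TRANSIENT
# TRANSPORT ERROR, the completion status the injected crc32c data digest
# errors produce in this run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock   # RPC socket of the bdevperf instance under test
bdev=nvme0n1

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The trace above expanded this to (( 223 > 0 )); a zero count would mean
# no digest errors were detected and the test should fail.
(( errcount > 0 ))

The second run being set up here repeats the same pattern with 128 KiB (131072-byte) random reads at queue depth 16: the controller is re-attached with --ddgst, crc32c corruption is re-armed through accel_error_inject_error, and the injected digest errors again complete as transient transport errors, as the log that follows shows.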
00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.504 22:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:20.504 [2024-07-24 22:14:59.694989] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:27:20.504 [2024-07-24 22:14:59.695041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839754 ] 00:27:20.504 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:20.504 Zero copy mechanism will not be used. 00:27:20.763 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.763 [2024-07-24 22:14:59.763747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.763 [2024-07-24 22:14:59.834893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.331 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.331 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:21.331 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:21.331 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:21.591 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:21.591 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.591 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.591 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.591 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.591 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.851 nvme0n1 00:27:21.851 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:21.851 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.851 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.851 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.851 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:21.851 22:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:21.851 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:21.851 Zero copy mechanism will not be used. 00:27:21.851 Running I/O for 2 seconds... 00:27:21.851 [2024-07-24 22:15:01.018738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:21.851 [2024-07-24 22:15:01.018776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-07-24 22:15:01.018790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.851 [2024-07-24 22:15:01.029605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:21.851 [2024-07-24 22:15:01.029633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-07-24 22:15:01.029649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.851 [2024-07-24 22:15:01.038964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:21.851 [2024-07-24 22:15:01.038989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-07-24 22:15:01.039001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.851 [2024-07-24 22:15:01.048631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:21.851 [2024-07-24 22:15:01.048655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-07-24 22:15:01.048666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.851 [2024-07-24 22:15:01.058049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:21.851 [2024-07-24 22:15:01.058072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-07-24 22:15:01.058083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.066165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.066189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.066199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.073204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 
[2024-07-24 22:15:01.073227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.073237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.079782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.079803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.079814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.086218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.086240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.086250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.092122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.092146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.092156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.098475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.098498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.098508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.104776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.104798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.104809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.110965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.110987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.110998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.117150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.117172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.117182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.123554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.123576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.123586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.130244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.130267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.130277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.136493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.136515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.136526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.149396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.149419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.149429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.159416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.159438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.159451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.168110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.168133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.168143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.175319] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.175342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.175352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.182251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.182275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.182285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.188850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.188873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.188884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.195140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.195162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.195173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.201361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.201384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.201394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.207550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.207573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.207583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.112 [2024-07-24 22:15:01.213761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.112 [2024-07-24 22:15:01.213783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.112 [2024-07-24 22:15:01.213793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:27:22.113 [2024-07-24 22:15:01.219970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.219997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.220008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.226141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.226164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.226174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.232371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.232393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.232403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.238564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.238586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.238596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.244753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.244775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.244785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.250905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.250928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.250938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.257040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.257063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.257073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.263171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.263193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.263203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.269324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.269363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.269373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.275883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.275905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.275916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.282135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.282157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.282168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.288520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.288542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.288551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.294703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.294732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.294741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.300914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.300935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.300944] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.307100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.307122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.307132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.313288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.313310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.313320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-07-24 22:15:01.319416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.113 [2024-07-24 22:15:01.319438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-07-24 22:15:01.319448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.325617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.325641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.325655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.331788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.331809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.331820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.337933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.337955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.337965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.344214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.344236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.344246] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.351079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.351100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.351110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.357311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.357332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.357342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.363459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.363481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.363491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.369658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.369681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.369691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.375831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.375854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.375864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.382042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.382068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.382078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.388166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.388188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:22.374 [2024-07-24 22:15:01.388199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.394280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.394302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.394313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.400473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.400497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.400508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.406631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.406653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.406663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.412797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.412819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.412829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.419812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.419834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.419844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.426862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.426884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.426894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.436550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.436572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.436582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.447988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.448012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.448022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.457699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.457726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.457737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.467450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.467474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.467485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.476959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.476982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.476994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.485331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.485355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.485365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.494420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.494444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.494455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.505375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.505398] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.505409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.518926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.518948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.518958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.528969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.528992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.529005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.537512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.537534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.537544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.544979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.545001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.545011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.551901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.551923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.551933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.558387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.558409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.558419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.565134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.565156] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.565167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.572468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.572490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.572499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.374 [2024-07-24 22:15:01.585134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.374 [2024-07-24 22:15:01.585156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.374 [2024-07-24 22:15:01.585166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.594986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.595008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.595018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.603377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.603399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.603409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.610463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.610485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.610495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.617168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.617190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.617200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.623608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.623630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.623641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.630124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.630148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.630159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.636694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.636722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.636733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.643141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.643164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.643174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.650160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.650184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.650195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.656654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.656678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.656695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.662957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.662981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.662992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.669235] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.669260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.669270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.675482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.675507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.675518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.681713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.681743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.681754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.687934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.687958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.687968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.694175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.694200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.694211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.700405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.700427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.700438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.706583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.706606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.706616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:22.635 [2024-07-24 22:15:01.712756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.712781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.712791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.718988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.719011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.635 [2024-07-24 22:15:01.719021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.635 [2024-07-24 22:15:01.725216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.635 [2024-07-24 22:15:01.725239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.725249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.731432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.731455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.731465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.737654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.737677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.737686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.743846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.743868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.743878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.750059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.750081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.750092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.756313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.756336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.756346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.762722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.762744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.762755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.768982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.769004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.769015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.775203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.775226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.775237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.781453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.781475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.781486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.787770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.787793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.787804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.794043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.794067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.794077] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.800681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.800706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.800722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.807580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.807603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.807614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.814153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.814176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.814187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.820611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.820633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.820646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.826982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.827005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.827017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.833347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.833370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.833380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.839701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.839730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.839741] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.636 [2024-07-24 22:15:01.845995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.636 [2024-07-24 22:15:01.846019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-07-24 22:15:01.846029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.896 [2024-07-24 22:15:01.852222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.896 [2024-07-24 22:15:01.852245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.896 [2024-07-24 22:15:01.852256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.896 [2024-07-24 22:15:01.858479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.896 [2024-07-24 22:15:01.858502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.896 [2024-07-24 22:15:01.858512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.896 [2024-07-24 22:15:01.864928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.896 [2024-07-24 22:15:01.864949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.864960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.871143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.871166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.871176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.877374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.877400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.877411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.883632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.883655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:22.897 [2024-07-24 22:15:01.883666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.889922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.889945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.889955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.896202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.896225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.896236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.902454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.902476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.902486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.908724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.908747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.908757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.914923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.914946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.914956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.921119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.921141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.921152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.927385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.927408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.927418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.933593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.933615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.933625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.939778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.939799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.939810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.946034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.946056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.946067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.952255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.952277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.952288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.958475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.958498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.958508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.964687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.964709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.964725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.971420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.971443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.971454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.979470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.979493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.979504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.988523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.988546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.988560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:01.997870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:01.997893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:01.997903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:02.007057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:02.007080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:02.007091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:02.016354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:02.016378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:02.016388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.897 [2024-07-24 22:15:02.025554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.897 [2024-07-24 22:15:02.025577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.897 [2024-07-24 22:15:02.025588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.898 [2024-07-24 22:15:02.034974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.898 
[2024-07-24 22:15:02.034998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.898 [2024-07-24 22:15:02.035009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.898 [2024-07-24 22:15:02.043921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.898 [2024-07-24 22:15:02.043945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.898 [2024-07-24 22:15:02.043956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.898 [2024-07-24 22:15:02.053116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.898 [2024-07-24 22:15:02.053140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.898 [2024-07-24 22:15:02.053151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.898 [2024-07-24 22:15:02.062872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.898 [2024-07-24 22:15:02.062897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.898 [2024-07-24 22:15:02.062908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.898 [2024-07-24 22:15:02.073033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.898 [2024-07-24 22:15:02.073057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.898 [2024-07-24 22:15:02.073067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.898 [2024-07-24 22:15:02.082597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.898 [2024-07-24 22:15:02.082620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.898 [2024-07-24 22:15:02.082631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.898 [2024-07-24 22:15:02.091886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:22.898 [2024-07-24 22:15:02.091909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.898 [2024-07-24 22:15:02.091919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.898 [2024-07-24 22:15:02.100574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1953bf0) 00:27:22.898 [2024-07-24 22:15:02.100598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.898 [2024-07-24 22:15:02.100609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.110758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.110782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.110793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.119743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.119766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.119777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.129901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.129924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.129935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.139420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.139444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.139454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.150003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.150027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.150041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.159902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.159926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.159936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.171893] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.171917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.171927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.183568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.183592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.183602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.195143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.195167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.195178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.206678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.206702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.206713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.217128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.217152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.217163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.228671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.228694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.158 [2024-07-24 22:15:02.228705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.158 [2024-07-24 22:15:02.239482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.158 [2024-07-24 22:15:02.239505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.239516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:27:23.159 [2024-07-24 22:15:02.250315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.250342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.250353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.260342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.260367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.260378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.270445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.270470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.270481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.281720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.281743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.281754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.291473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.291497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.291508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.301468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.301492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.301503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.311662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.311685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.311696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.322316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.322339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.322350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.332562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.332585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.332595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.341002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.341025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.341035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.348457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.348479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.348490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.355736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.355759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.355769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.159 [2024-07-24 22:15:02.363052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.159 [2024-07-24 22:15:02.363076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.159 [2024-07-24 22:15:02.363086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.371571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.371596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.371607] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.381086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.381109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.381120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.391972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.391995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.392007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.402477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.402500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.402510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.414075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.414101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.414115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.424824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.424847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.424858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.435560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.435584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.435594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.445427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.445451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.445462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.456190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.456213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.456224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.466229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.466253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.466263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.476033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.476056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.476067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.486500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.486523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.486534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.496733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.496755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.496766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.506698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.506729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.506740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.515572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.515594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:23.419 [2024-07-24 22:15:02.515605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.523657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.523680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.523690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.531804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.531827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.531837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.540914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.540937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.540948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.549757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.549779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.549790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.558902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.558925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.558935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.567192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.419 [2024-07-24 22:15:02.567214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.419 [2024-07-24 22:15:02.567225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.419 [2024-07-24 22:15:02.575198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.420 [2024-07-24 22:15:02.575221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7456 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.420 [2024-07-24 22:15:02.575231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.420 [2024-07-24 22:15:02.583173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.420 [2024-07-24 22:15:02.583196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.420 [2024-07-24 22:15:02.583207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.420 [2024-07-24 22:15:02.591684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.420 [2024-07-24 22:15:02.591707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.420 [2024-07-24 22:15:02.591725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.420 [2024-07-24 22:15:02.599388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.420 [2024-07-24 22:15:02.599410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.420 [2024-07-24 22:15:02.599421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.420 [2024-07-24 22:15:02.607356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.420 [2024-07-24 22:15:02.607379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.420 [2024-07-24 22:15:02.607390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.420 [2024-07-24 22:15:02.615063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.420 [2024-07-24 22:15:02.615086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.420 [2024-07-24 22:15:02.615097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.420 [2024-07-24 22:15:02.623403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.420 [2024-07-24 22:15:02.623426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.420 [2024-07-24 22:15:02.623437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.632226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.632251] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.632262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.641354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.641376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.641387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.650813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.650837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.650852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.662316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.662340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.662350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.674174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.674198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.674209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.684429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.684453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.684463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.696019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.696042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.696053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.706801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.706825] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.706835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.716540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.716563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.716574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.727090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.727113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.727124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.737080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.737104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.737115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.746642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.746666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.746677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.755264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.755287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.755297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.764482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.764505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.764516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.774233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.774256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.774267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.784264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.784287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.784297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.794533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.680 [2024-07-24 22:15:02.794558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.680 [2024-07-24 22:15:02.794569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.680 [2024-07-24 22:15:02.806793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.681 [2024-07-24 22:15:02.806817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.681 [2024-07-24 22:15:02.806828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.681 [2024-07-24 22:15:02.818669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.681 [2024-07-24 22:15:02.818692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.681 [2024-07-24 22:15:02.818703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.681 [2024-07-24 22:15:02.829892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.681 [2024-07-24 22:15:02.829917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.681 [2024-07-24 22:15:02.829931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.681 [2024-07-24 22:15:02.840541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.681 [2024-07-24 22:15:02.840565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.681 [2024-07-24 22:15:02.840576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.681 [2024-07-24 22:15:02.849941] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.681 [2024-07-24 22:15:02.849965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.681 [2024-07-24 22:15:02.849975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.681 [2024-07-24 22:15:02.859804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.681 [2024-07-24 22:15:02.859839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.681 [2024-07-24 22:15:02.859849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.681 [2024-07-24 22:15:02.870359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.681 [2024-07-24 22:15:02.870383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.681 [2024-07-24 22:15:02.870394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.681 [2024-07-24 22:15:02.880962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.681 [2024-07-24 22:15:02.880986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.681 [2024-07-24 22:15:02.880997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.681 [2024-07-24 22:15:02.890905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.681 [2024-07-24 22:15:02.890929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.681 [2024-07-24 22:15:02.890940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:02.900889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.900913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.900924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:02.911504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.911528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.911540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:23.941 [2024-07-24 22:15:02.921914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.921940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.921950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:02.932883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.932907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.932917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:02.943938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.943961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.943971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:02.954062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.954085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.954095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:02.963573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.963597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.963608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:02.972523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.972547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.972558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:02.981959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.981982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.981993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:02.992448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:02.992471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:02.992482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.941 [2024-07-24 22:15:03.001834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1953bf0) 00:27:23.941 [2024-07-24 22:15:03.001868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.941 [2024-07-24 22:15:03.001879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.941
00:27:23.941                                                                        Latency(us)
00:27:23.941 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:23.941 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:23.941      nvme0n1                :       2.00    3777.89     472.24       0.00     0.00    4232.28    1002.70   14155.78
00:27:23.941 ===================================================================================================================
00:27:23.941      Total                  :               3777.89     472.24       0.00     0.00    4232.28    1002.70   14155.78
00:27:23.941 0
00:27:23.941 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:23.941 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:23.941 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:23.941 | .driver_specific 00:27:23.941 | .nvme_error 00:27:23.941 | .status_code 00:27:23.941 | .command_transient_transport_error' 00:27:23.941 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 243 > 0 )) 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2839754 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2839754 ']' 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2839754 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2839754 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 --
# echo 'killing process with pid 2839754' killing process with pid 2839754 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2839754 Received shutdown signal, test time was about 2.000000 seconds 00:27:24.200
00:27:24.200                                                                        Latency(us)
00:27:24.200 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:24.200 ===================================================================================================================
00:27:24.200      Total                  :       0.00       0.00       0.00       0.00     0.00       0.00       0.00
00:27:24.200 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2839754 00:27:24.458 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2840510 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2840510 /var/tmp/bperf.sock 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2840510 ']' 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 22:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:24.458 [2024-07-24 22:15:03.494425] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization...
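Note: the get_transient_errcount trace above is digest.sh's pass/fail gate for the randread pass. It reads the bdev's NVMe error counters over the bperf RPC socket and requires at least one transient transport error, since digest corruption was injected on purpose. A minimal sketch of that check, with the helper expanded into the rpc.py and jq calls shown in the trace (socket path and bdev name taken from this run; not the script itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Count completions that ended in TRANSIENT TRANSPORT ERROR (00/22) for nvme0n1;
    # these counters are kept because bdev_nvme_set_options was given --nvme-error-stat.
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The injected CRC32C corruption must have produced at least one such error
    # (243 in the run above); a zero count fails the digest test.
    (( errcount > 0 ))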
00:27:24.458 [2024-07-24 22:15:03.494479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840510 ] 00:27:24.458 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.458 [2024-07-24 22:15:03.565465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.458 [2024-07-24 22:15:03.632006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.394 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.652 nvme0n1 00:27:25.653 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:25.653 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.653 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:25.653 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.653 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:25.653 22:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:25.912 Running I/O for 2 seconds... 
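For the randwrite pass, the trace above rebuilds the same setup against the freshly started bdevperf instance. Roughly, the sequence of RPCs is as follows (addresses and paths as captured in this run; rpc_cmd is assumed to resolve to the target's default RPC socket, so this is an illustrative sketch rather than the script itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock
    # Keep per-controller NVMe error statistics and retry failed I/O (-1 = unlimited),
    # so injected digest errors are counted as transient errors instead of failing the job.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with crc32c error injection disabled on the target, then attach the TCP
    # controller with data digest enabled (--ddgst) so data PDUs carry a CRC32C.
    "$rpc" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt crc32c operations (-i 256 as in the trace) and kick off the queued
    # randwrite job for its 2-second window.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests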
00:27:25.912 [2024-07-24 22:15:04.909142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.909386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.909417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:04.918650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.918880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.918906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:04.928193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.928430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.928452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:04.937551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.937788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.937808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:04.946770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.947001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.947022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:04.955963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.956193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.956213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:04.965161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.965383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.965404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:25.912 [2024-07-24 22:15:04.974344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.974571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.974591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:04.983512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.983756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.983776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:04.992695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:04.992929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:04.992949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:05.001824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:05.002044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:05.002065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:05.010943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:05.011189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:05.011209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:05.020090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:05.020336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:05.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:05.029422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:05.029668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:05.029688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:25.912 [2024-07-24 22:15:05.038692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:05.038944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:05.038964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:05.047857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:05.048081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:05.048101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:05.057059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:05.057282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:05.057302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:05.066423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.912 [2024-07-24 22:15:05.066648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.912 [2024-07-24 22:15:05.066671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.912 [2024-07-24 22:15:05.075850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.913 [2024-07-24 22:15:05.076067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.913 [2024-07-24 22:15:05.076086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.913 [2024-07-24 22:15:05.085252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.913 [2024-07-24 22:15:05.085472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.913 [2024-07-24 22:15:05.085492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.913 [2024-07-24 22:15:05.094824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.913 [2024-07-24 22:15:05.095052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.913 [2024-07-24 22:15:05.095073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:25.913 [2024-07-24 22:15:05.104749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.913 [2024-07-24 22:15:05.105004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.913 [2024-07-24 22:15:05.105025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.913 [2024-07-24 22:15:05.114746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:25.913 [2024-07-24 22:15:05.114983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.913 [2024-07-24 22:15:05.115003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:25.913 [2024-07-24 22:15:05.124741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.124978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.124998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.134423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.134653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.134672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.143831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.144063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.144083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.153272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.153499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.153522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.162634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.162865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.162884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.172 [2024-07-24 22:15:05.172135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.172376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.172396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.181327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.181547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.181566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.190633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.190892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.190911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.199769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.199996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.200015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.208882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.209109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.209128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.218093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.218339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.218359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.227287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.227494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.227513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.172 [2024-07-24 22:15:05.236431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.236649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.236669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.245840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.246063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.246083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.255239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.255469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.172 [2024-07-24 22:15:05.255487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.172 [2024-07-24 22:15:05.264531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.172 [2024-07-24 22:15:05.264763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.264782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.273846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.274091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.274111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.283106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.283335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.283354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.292297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.292542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.292561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.173 [2024-07-24 22:15:05.301459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.301689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.301709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.310614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.310849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.310869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.319831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.320103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.320124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.329021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.329233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.329252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.338162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.338400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.338420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.347448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.347676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.347697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.356667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.356923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.356943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.173 [2024-07-24 22:15:05.365810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.366042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.366062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.374932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.375174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.375194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.173 [2024-07-24 22:15:05.384261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.173 [2024-07-24 22:15:05.384484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.173 [2024-07-24 22:15:05.384504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.393662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.393889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.393917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.403064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.403287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.403307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.412461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.412695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.412719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.422017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.422252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.422271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.432 [2024-07-24 22:15:05.431494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.431727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.431747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.440659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.440895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.440914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.449854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.450091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.450112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.459059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.459277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.459298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.468314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.468537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.468557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.477689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.477922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.477941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.486916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.487158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.487177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.432 [2024-07-24 22:15:05.496080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.496327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.496347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.505259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.505488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.505507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.514398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.514627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.514647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.523591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.523809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.432 [2024-07-24 22:15:05.523829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.432 [2024-07-24 22:15:05.532829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.432 [2024-07-24 22:15:05.533049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.533068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.541993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.542208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.542227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.551203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.551421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.551441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.433 [2024-07-24 22:15:05.560340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.560560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.560579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.569584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.569827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.569847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.578908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.579130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.579150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.588161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.588379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.588399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.597370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.597608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.597627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.606560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.606783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.606803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.615738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.615956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.615976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.433 [2024-07-24 22:15:05.624935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.625174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.625193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.634110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.634338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.634357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.433 [2024-07-24 22:15:05.643283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.433 [2024-07-24 22:15:05.643532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.433 [2024-07-24 22:15:05.643551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.693 [2024-07-24 22:15:05.652638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.652889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.693 [2024-07-24 22:15:05.652909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.693 [2024-07-24 22:15:05.661842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.662082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.693 [2024-07-24 22:15:05.662102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.693 [2024-07-24 22:15:05.671065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.671297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.693 [2024-07-24 22:15:05.671316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.693 [2024-07-24 22:15:05.680445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.680677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.693 [2024-07-24 22:15:05.680696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.693 [2024-07-24 22:15:05.689878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.690124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.693 [2024-07-24 22:15:05.690144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.693 [2024-07-24 22:15:05.699063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.699290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.693 [2024-07-24 22:15:05.699310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.693 [2024-07-24 22:15:05.708168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.708397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.693 [2024-07-24 22:15:05.708417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.693 [2024-07-24 22:15:05.717323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.717550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.693 [2024-07-24 22:15:05.717572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.693 [2024-07-24 22:15:05.726485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.726732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.693 [2024-07-24 22:15:05.726752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.693 [2024-07-24 22:15:05.735659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.693 [2024-07-24 22:15:05.735892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.735912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.744767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.744996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.745015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.694 [2024-07-24 22:15:05.754014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.754256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.754275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.763164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.763388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.763408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.772280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.772498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.772517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.781410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.781623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.781642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.790559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.790807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.790827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.799708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.799945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.799965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.808811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.809038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.809057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.694 [2024-07-24 22:15:05.817935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.818165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.818185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.827083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.827306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.827326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.836205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.836436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.836455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.845303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.845522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.845541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.854465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.854707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.854731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.863607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.863838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.863858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.872707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.872930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.872950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.694 [2024-07-24 22:15:05.882148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.882384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.882404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.891322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.891550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.891570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.694 [2024-07-24 22:15:05.900434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.694 [2024-07-24 22:15:05.900661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.694 [2024-07-24 22:15:05.900681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.954 [2024-07-24 22:15:05.909766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.954 [2024-07-24 22:15:05.910011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.954 [2024-07-24 22:15:05.910030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.954 [2024-07-24 22:15:05.919068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.954 [2024-07-24 22:15:05.919296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.954 [2024-07-24 22:15:05.919316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.954 [2024-07-24 22:15:05.928329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.954 [2024-07-24 22:15:05.928565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.954 [2024-07-24 22:15:05.928584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.954 [2024-07-24 22:15:05.937744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.954 [2024-07-24 22:15:05.937987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.954 [2024-07-24 22:15:05.938007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.954 [2024-07-24 22:15:05.946979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.954 [2024-07-24 22:15:05.947215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.954 [2024-07-24 22:15:05.947235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:05.956042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:05.956260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:05.956279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:05.965180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:05.965397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:05.965416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:05.974310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:05.974534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:05.974553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:05.983506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:05.983742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:05.983762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:05.992641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:05.992862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:05.992881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.001782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.002009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.002030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.955 [2024-07-24 22:15:06.010965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.011201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.011220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.020153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.020376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.020395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.029265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.029483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.029503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.038444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.038668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.038690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.047685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.047938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.047958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.056862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.057092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.057111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.066023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.066240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.066259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.955 [2024-07-24 22:15:06.075204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.075440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.075459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.084382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.084601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.084620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.093495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.093736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.093755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.102661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.102902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.102922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.111844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.112062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.112081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.120966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.121188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.121207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.130081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.130298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.130318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:26.955 [2024-07-24 22:15:06.139265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.139501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.139521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.148425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.148643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.148662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.157631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:26.955 [2024-07-24 22:15:06.157859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.955 [2024-07-24 22:15:06.157879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.955 [2024-07-24 22:15:06.167047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.215 [2024-07-24 22:15:06.167279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.215 [2024-07-24 22:15:06.167300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.215 [2024-07-24 22:15:06.176422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.215 [2024-07-24 22:15:06.176659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.215 [2024-07-24 22:15:06.176678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.215 [2024-07-24 22:15:06.185765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.215 [2024-07-24 22:15:06.185999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.215 [2024-07-24 22:15:06.186018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.215 [2024-07-24 22:15:06.195282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.215 [2024-07-24 22:15:06.195523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.215 [2024-07-24 22:15:06.195543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.215 [2024-07-24 22:15:06.204615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.215 [2024-07-24 22:15:06.204868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.215 [2024-07-24 22:15:06.204890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.215 [2024-07-24 22:15:06.213810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.215 [2024-07-24 22:15:06.214029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.215 [2024-07-24 22:15:06.214050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.215 [2024-07-24 22:15:06.222984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.215 [2024-07-24 22:15:06.223221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.215 [2024-07-24 22:15:06.223241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.215 [2024-07-24 22:15:06.232165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.215 [2024-07-24 22:15:06.232385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.215 [2024-07-24 22:15:06.232404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.215 [2024-07-24 22:15:06.241288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.241508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.241527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.250367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.250595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.250614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.259577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.259821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.259840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.216 [2024-07-24 22:15:06.268750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.268967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.268986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.277890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.278108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.278127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.287065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.287315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.287335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.296249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.296467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.296487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.305385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.305603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.305622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.314456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.314683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.314702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.323670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.323915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.323934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.216 [2024-07-24 22:15:06.332824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.333049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.333067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.341995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.342248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.342268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.351179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.351397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.351416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.360313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.360555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.360577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.369485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.369723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.369742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.378609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.378832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.378851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.387756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.387993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.388013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.216 [2024-07-24 22:15:06.397068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.397290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.397309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.406426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.406658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.406677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.415832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.416063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.416084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.216 [2024-07-24 22:15:06.425206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.216 [2024-07-24 22:15:06.425436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.216 [2024-07-24 22:15:06.425456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.476 [2024-07-24 22:15:06.434606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.476 [2024-07-24 22:15:06.434847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.476 [2024-07-24 22:15:06.434867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.476 [2024-07-24 22:15:06.443969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.476 [2024-07-24 22:15:06.444192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.476 [2024-07-24 22:15:06.444211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.476 [2024-07-24 22:15:06.453506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.476 [2024-07-24 22:15:06.453738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.476 [2024-07-24 22:15:06.453758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.476 [2024-07-24 22:15:06.462900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.476 [2024-07-24 22:15:06.463130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.476 [2024-07-24 22:15:06.463150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.476 [2024-07-24 22:15:06.472306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.476 [2024-07-24 22:15:06.472539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.476 [2024-07-24 22:15:06.472559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.476 [2024-07-24 22:15:06.481737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.476 [2024-07-24 22:15:06.481956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.476 [2024-07-24 22:15:06.481976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.476 [2024-07-24 22:15:06.491127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.476 [2024-07-24 22:15:06.491366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.476 [2024-07-24 22:15:06.491386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.476 [2024-07-24 22:15:06.500461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.476 [2024-07-24 22:15:06.500710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.500735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.509766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.510002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.510022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.518952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.519186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.519205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.477 [2024-07-24 22:15:06.528077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.528306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.528325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.537223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.537511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.537530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.546390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.546602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.546620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.555570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.555804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.555823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.564775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.565024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.565044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.573937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.574165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.574184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.583075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.583312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.583331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.477 [2024-07-24 22:15:06.592288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.592506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.592525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.601454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.601675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.601694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.610655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.610898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.610919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.619895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.620113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.620132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.629076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.629298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.629318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.638241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.638451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.638471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.647408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.647637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.647656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.477 [2024-07-24 22:15:06.656935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.657148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.657167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.666094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.666316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.666336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.675272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.675490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.675510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.477 [2024-07-24 22:15:06.684527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.477 [2024-07-24 22:15:06.684749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.477 [2024-07-24 22:15:06.684772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.737 [2024-07-24 22:15:06.693909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.737 [2024-07-24 22:15:06.694148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.737 [2024-07-24 22:15:06.694167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.737 [2024-07-24 22:15:06.703326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.737 [2024-07-24 22:15:06.703560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.737 [2024-07-24 22:15:06.703579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.737 [2024-07-24 22:15:06.712475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.737 [2024-07-24 22:15:06.712725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.737 [2024-07-24 22:15:06.712744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.737 [2024-07-24 22:15:06.721665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.737 [2024-07-24 22:15:06.721905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.737 [2024-07-24 22:15:06.721924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.737 [2024-07-24 22:15:06.730776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.737 [2024-07-24 22:15:06.731001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.737 [2024-07-24 22:15:06.731021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.737 [2024-07-24 22:15:06.739976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.737 [2024-07-24 22:15:06.740221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.737 [2024-07-24 22:15:06.740241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.737 [2024-07-24 22:15:06.749156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.737 [2024-07-24 22:15:06.749382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.749401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.758375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.758604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.758625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.767622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.767859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.767878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.776866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.777088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.777107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.738 [2024-07-24 22:15:06.786071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.786288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.786308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.795258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.795476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.795495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.804438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.804674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.804694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.813658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.813886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.813906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.822815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.823034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.823054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.832034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.832269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.832290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.841236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.841455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.841474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.738 [2024-07-24 22:15:06.850399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.850622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.850642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.859590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.859816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.859842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.868734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.868946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.868965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.877953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.878170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.878189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.887139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.887377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.887396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.896432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.896652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.896671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.738 [2024-07-24 22:15:06.905800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc3f810) with pdu=0x2000190fb048 00:27:27.738 [2024-07-24 22:15:06.906017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.738 [2024-07-24 22:15:06.906037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:27.738
00:27:27.738 Latency(us)
00:27:27.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:27.738 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:27.738 nvme0n1 : 2.00 27602.57 107.82 0.00 0.00 4628.18 1979.19 10066.33
00:27:27.738 ===================================================================================================================
00:27:27.738 Total : 27602.57 107.82 0.00 0.00 4628.18 1979.19 10066.33
00:27:27.738 0
00:27:27.738 22:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:27.738 22:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:27.738 22:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:27.738 22:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:27.738 | .driver_specific
00:27:27.738 | .nvme_error
00:27:27.738 | .status_code
00:27:27.738 | .command_transient_transport_error'
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2840510
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2840510 ']'
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2840510
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2840510
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2840510'
00:27:27.998 killing process with pid 2840510
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2840510
00:27:27.998 Received shutdown signal, test time was about 2.000000 seconds
00:27:27.998
00:27:27.998 Latency(us)
00:27:27.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:27.998 ===================================================================================================================
00:27:27.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:27.998 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2840510
00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2841632 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2841632 /var/tmp/bperf.sock 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2841632 ']' 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:28.264 22:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:28.264 [2024-07-24 22:15:07.390072] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:27:28.264 [2024-07-24 22:15:07.390125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841632 ] 00:27:28.264 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:28.264 Zero copy mechanism will not be used. 
00:27:28.264 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.264 [2024-07-24 22:15:07.459845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.523 [2024-07-24 22:15:07.533991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.092 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:29.092 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:29.092 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:29.092 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:29.351 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:29.351 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.351 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:29.351 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.351 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.351 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.610 nvme0n1 00:27:29.610 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:29.610 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.610 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:29.610 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.610 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:29.610 22:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:29.869 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:29.869 Zero copy mechanism will not be used. 00:27:29.869 Running I/O for 2 seconds... 
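The xtrace above is the complete setup for the 128 KiB digest-error pass: bdevperf is started in wait-for-RPC mode, per-status error counters and unlimited bdev retries are enabled, crc32c corruption is injected through accel_error_inject_error, and the namespace is attached with the TCP data digest (--ddgst) enabled before the workload is started. Condensed into plain commands, and assuming the paths used by this job (SPDK tree under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, bperf RPC socket at /var/tmp/bperf.sock, target reachable at 10.0.0.2:4420) and that rpc_cmd in the trace points at the target application's default RPC socket, the sequence is roughly:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # bdevperf on core 1 (-m 2), 128 KiB random writes, queue depth 16, 2 s run, wait for RPC (-z)
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # keep per-error-code statistics and retry failed I/O indefinitely so the run still completes
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # corrupt crc32c results in the accel layer of the target application (arguments copied from the trace above)
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # attach the remote namespace with the data digest enabled, then kick off the workload
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Writes that hit a corrupted digest are completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the blocks of NOTICE lines below record for this pass.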
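Once the two-second run finishes, the pass/fail decision rests not on throughput but on whether transient transport errors were actually produced. The teardown of the previous 4 KiB pass, further up, shows how: bdev_get_iostat is fetched over the bperf socket, the per-status error counter is extracted with jq, and the result is compared against zero (the (( 217 > 0 )) check). A small sketch reconstructed from that trace, not the literal digest.sh source, assuming the same socket and jq path:

  # count completions that carried COMMAND TRANSIENT TRANSPORT ERROR for a given bdev
  get_transient_errcount() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }

  (( $(get_transient_errcount nvme0n1) > 0 ))   # the assertion made after each digest-error run

As a consistency check on the earlier summary table: 27602.57 IOPS at 4096 bytes per I/O works out to 27602.57 x 4096 / 1048576, or roughly 107.8 MiB/s, matching the reported MiB/s column.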
00:27:29.869 [2024-07-24 22:15:08.875076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.869 [2024-07-24 22:15:08.875534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.869 [2024-07-24 22:15:08.875561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:29.869 [2024-07-24 22:15:08.887273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.869 [2024-07-24 22:15:08.887632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.869 [2024-07-24 22:15:08.887659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:29.869 [2024-07-24 22:15:08.895619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.869 [2024-07-24 22:15:08.895989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.869 [2024-07-24 22:15:08.896012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:29.869 [2024-07-24 22:15:08.902731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.869 [2024-07-24 22:15:08.903093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.869 [2024-07-24 22:15:08.903114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.869 [2024-07-24 22:15:08.909817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.869 [2024-07-24 22:15:08.910163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.869 [2024-07-24 22:15:08.910184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:29.869 [2024-07-24 22:15:08.917596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.869 [2024-07-24 22:15:08.917967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.869 [2024-07-24 22:15:08.917990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:29.869 [2024-07-24 22:15:08.924390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.869 [2024-07-24 22:15:08.924739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.924760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.930699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.930837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.930857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.938414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.938779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.938799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.946919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.947268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.947288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.954849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.955189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.955209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.961666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.962039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.962060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.968519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.968869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.968891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.975512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.975872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.975892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.982869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.983224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.983243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.990423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.990767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.990787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:08.997504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:08.997865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:08.997886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.003828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.004167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.004186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.011182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.011533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.011553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.019167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.019508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.019527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.025706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.026046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.026066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.032099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.032441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.032461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.038363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.038699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.038727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.044970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.045317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.045338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.051314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.051660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.051679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.057793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.058151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.058171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.064053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.064394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.064413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.070157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.070288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 
[2024-07-24 22:15:09.070310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.870 [2024-07-24 22:15:09.076447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:29.870 [2024-07-24 22:15:09.076809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.870 [2024-07-24 22:15:09.076829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.130 [2024-07-24 22:15:09.083001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.130 [2024-07-24 22:15:09.083354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.130 [2024-07-24 22:15:09.083374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.130 [2024-07-24 22:15:09.091142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.091484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.091504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.097996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.098354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.098374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.105183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.105302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.105321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.112630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.112966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.112986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.119620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.119983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.120004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.126487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.126838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.126859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.133517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.133881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.133902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.140426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.140788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.140808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.147811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.148162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.148183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.156090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.156436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.156456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.164930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.165296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.165315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.174086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.174445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.174464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.183012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.183367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.183387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.191703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.192071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.192091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.200664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.201038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.201059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.210018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.210389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.210410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.218052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.218412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.218432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.224626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.224974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.224994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.232213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.232557] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.232577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.238521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.238631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.238650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.244898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.245237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.245257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.251376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.251729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.251749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.258028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.258121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.258140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.264624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.265006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.265029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.271528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.271926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.271945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.278332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.278695] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.278722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.285704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.131 [2024-07-24 22:15:09.286103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.131 [2024-07-24 22:15:09.286122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.131 [2024-07-24 22:15:09.293070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.132 [2024-07-24 22:15:09.293427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.132 [2024-07-24 22:15:09.293446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.132 [2024-07-24 22:15:09.299760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.132 [2024-07-24 22:15:09.300109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.132 [2024-07-24 22:15:09.300128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.132 [2024-07-24 22:15:09.306997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.132 [2024-07-24 22:15:09.307626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.132 [2024-07-24 22:15:09.307646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.132 [2024-07-24 22:15:09.314152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.132 [2024-07-24 22:15:09.314524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.132 [2024-07-24 22:15:09.314544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.132 [2024-07-24 22:15:09.320496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.132 [2024-07-24 22:15:09.320817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.132 [2024-07-24 22:15:09.320837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.132 [2024-07-24 22:15:09.326247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 
00:27:30.132 [2024-07-24 22:15:09.326568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.132 [2024-07-24 22:15:09.326588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.132 [2024-07-24 22:15:09.332022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.132 [2024-07-24 22:15:09.332347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.132 [2024-07-24 22:15:09.332367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.132 [2024-07-24 22:15:09.338033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.132 [2024-07-24 22:15:09.338361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.132 [2024-07-24 22:15:09.338381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.392 [2024-07-24 22:15:09.343877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.392 [2024-07-24 22:15:09.344208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.392 [2024-07-24 22:15:09.344228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.392 [2024-07-24 22:15:09.349711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.392 [2024-07-24 22:15:09.350041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.392 [2024-07-24 22:15:09.350062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.392 [2024-07-24 22:15:09.356256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.392 [2024-07-24 22:15:09.356569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.392 [2024-07-24 22:15:09.356590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.361927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.362252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.362272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.367894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.368208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.368228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.373879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.374209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.374233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.379393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.379724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.379743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.384987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.385322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.385342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.390894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.391223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.391242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.396551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.396886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.396907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.402401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.402734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.402754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.408052] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.408370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.408390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.414507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.414918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.414938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.421110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.421418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.421438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.427406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.427745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.427765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.432922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.433247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.433267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.438274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.438603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.438623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.444024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.444342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.444361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:30.393 [2024-07-24 22:15:09.450205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.450527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.450547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.456777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.457095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.457115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.463193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.463514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.463534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.470301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.470633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.470653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.476665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.477011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.477031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.483069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.483385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.483405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.490060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.490388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.490408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.497187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.497538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.497559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.503711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.504077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.504096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.510529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.510861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.510881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.517036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.517359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.393 [2024-07-24 22:15:09.517379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.393 [2024-07-24 22:15:09.523546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.393 [2024-07-24 22:15:09.523924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.523943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.530332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.530657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.530677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.537107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.537447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.537471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.543246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.543573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.543593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.549558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.549883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.549903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.555337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.555661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.555681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.560809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.561126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.561145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.566757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.567088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.567108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.572309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.572624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.572643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.578568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.578923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.578943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.584286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.584597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.584616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.590017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.590341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.590360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.596606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.596948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.596967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.394 [2024-07-24 22:15:09.602902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.394 [2024-07-24 22:15:09.603227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.394 [2024-07-24 22:15:09.603247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.655 [2024-07-24 22:15:09.609139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.655 [2024-07-24 22:15:09.609459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.655 [2024-07-24 22:15:09.609479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.655 [2024-07-24 22:15:09.615817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.655 [2024-07-24 22:15:09.616154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.655 [2024-07-24 22:15:09.616173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.655 [2024-07-24 22:15:09.622005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.655 [2024-07-24 22:15:09.622329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.655 
[2024-07-24 22:15:09.622349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.655 [2024-07-24 22:15:09.628081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.655 [2024-07-24 22:15:09.628405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.655 [2024-07-24 22:15:09.628425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.655 [2024-07-24 22:15:09.634296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.655 [2024-07-24 22:15:09.634617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.655 [2024-07-24 22:15:09.634637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.655 [2024-07-24 22:15:09.640096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.655 [2024-07-24 22:15:09.640426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.640446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.646304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.646623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.646643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.652308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.652614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.652635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.657744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.658044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.658064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.663788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.664085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.664108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.669747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.670072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.670092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.675629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.675956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.675976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.681799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.682103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.682123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.687457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.687759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.687779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.693098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.693412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.693439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.698669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.698995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.699015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.704463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.704774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.704793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.710086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.710430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.710449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.715675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.715979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.716000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.721173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.721478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.721498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.726906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.727225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.727245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.733485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.733784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.733804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.739181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.739485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.739506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.745877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.746261] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.746282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.753399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.753862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.753883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.761301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.761566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.761585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.768151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.768454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.768474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.774467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.774781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.774801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.781730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.782013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.782033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.787340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.787600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.787620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.792533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.792835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.656 [2024-07-24 22:15:09.792855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.656 [2024-07-24 22:15:09.798122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.656 [2024-07-24 22:15:09.798389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.798408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.657 [2024-07-24 22:15:09.803470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.657 [2024-07-24 22:15:09.803739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.803760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.657 [2024-07-24 22:15:09.809547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.657 [2024-07-24 22:15:09.809898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.809918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.657 [2024-07-24 22:15:09.816109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.657 [2024-07-24 22:15:09.816436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.816456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.657 [2024-07-24 22:15:09.823953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.657 [2024-07-24 22:15:09.824298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.824319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.657 [2024-07-24 22:15:09.830288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.657 [2024-07-24 22:15:09.830653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.830674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.657 [2024-07-24 22:15:09.838164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.657 
[2024-07-24 22:15:09.838571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.838591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.657 [2024-07-24 22:15:09.845635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.657 [2024-07-24 22:15:09.845972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.845992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.657 [2024-07-24 22:15:09.853073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.657 [2024-07-24 22:15:09.853426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.853447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.657 [2024-07-24 22:15:09.860603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.657 [2024-07-24 22:15:09.860988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.657 [2024-07-24 22:15:09.861012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.868936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.869304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.869325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.876927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.877219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.877238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.885107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.885518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.885539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.892862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) 
with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.893223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.893243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.901261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.901667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.901686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.909218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.909544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.909564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.918279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.918582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.918602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.924247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.924521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.924541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.929970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.930241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.930261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.935866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.936226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.936246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.941236] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.941500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.941520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.946672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.947068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.947089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.918 [2024-07-24 22:15:09.952230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.918 [2024-07-24 22:15:09.952505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.918 [2024-07-24 22:15:09.952525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:09.957900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:09.958169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:09.958190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:09.964178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:09.964441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:09.964461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:09.970604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:09.970928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:09.970948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:09.976155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:09.976419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:09.976438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:09.981773] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:09.982031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:09.982050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:09.987869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:09.988197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:09.988216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:09.994109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:09.994372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:09.994392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:09.999539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:09.999826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:09.999846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.005765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.006126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.006146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.012745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.013023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.013044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.019875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.020163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.020186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:30.919 [2024-07-24 22:15:10.026912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.027219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.027240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.033055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.033327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.033351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.039758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.040024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.040044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.046261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.046531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.046553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.052397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.052676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.052697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.059461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.059792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.059812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.065240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.065502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.065522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.072814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.073091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.073114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.078304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.078658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.078678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.084126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.084416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.084436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.089313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.089592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.089612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.094785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.095101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.095122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.100343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.100694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.100720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.106238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.106553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.106574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.111894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.112227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.112248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.919 [2024-07-24 22:15:10.118305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.919 [2024-07-24 22:15:10.118650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.919 [2024-07-24 22:15:10.118671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.920 [2024-07-24 22:15:10.125603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:30.920 [2024-07-24 22:15:10.125932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.920 [2024-07-24 22:15:10.125952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.133504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.133848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.133869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.140498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.140837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.140858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.147950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.148325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.148346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.155583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.155950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.155971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.163413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.163789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.163811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.171387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.171758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.171780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.179325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.179719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.179740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.186854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.187191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.187212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.194409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.194788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.194809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.202101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.202482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 [2024-07-24 22:15:10.202503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.210113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.179 [2024-07-24 22:15:10.210438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.179 
[2024-07-24 22:15:10.210462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.179 [2024-07-24 22:15:10.217900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.218238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.218258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.224829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.225124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.225144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.231509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.231801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.231822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.248153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.248739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.248761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.257614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.258028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.258049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.265397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.265707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.265732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.270866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.271145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.271166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.276274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.276571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.276592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.282701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.283007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.283027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.288410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.288680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.288700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.294699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.295162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.295182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.300680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.300951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.300972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.307257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.307542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.307562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.314877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.315146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.315166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.322230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.322505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.322526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.329508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.329831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.329852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.335549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.335832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.335852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.341202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.341464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.341485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.346542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.346808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.346829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.352720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.353063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.353083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.359179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.359507] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.359528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.365298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.365579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.365600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.372077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.372362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.372383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.380359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.380707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.380733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.180 [2024-07-24 22:15:10.388583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.180 [2024-07-24 22:15:10.388978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.180 [2024-07-24 22:15:10.388999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.397343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 22:15:10.397739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.440 [2024-07-24 22:15:10.397763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.406016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 22:15:10.406332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.440 [2024-07-24 22:15:10.406353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.414739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 22:15:10.414998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.440 [2024-07-24 22:15:10.415019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.422299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 22:15:10.422611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.440 [2024-07-24 22:15:10.422633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.429294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 22:15:10.429562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.440 [2024-07-24 22:15:10.429583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.441438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 22:15:10.441986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.440 [2024-07-24 22:15:10.442007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.453783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 22:15:10.454239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.440 [2024-07-24 22:15:10.454259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.461440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 22:15:10.461722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.440 [2024-07-24 22:15:10.461743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.468031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 22:15:10.468298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.440 [2024-07-24 22:15:10.468318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.440 [2024-07-24 22:15:10.475407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.440 [2024-07-24 
22:15:10.475750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.475787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.481253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.481543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.481563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.487904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.488169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.488190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.493668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.493922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.493943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.499084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.499370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.499390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.505196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.505530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.505551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.518711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.519187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.519208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.528100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with 
pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.528420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.528440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.535428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.535708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.535737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.549384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.549833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.549854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.559296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.559670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.559690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.567154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.567527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.567548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.575490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.575816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.575837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.589975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.590427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.590448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.601124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.601469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.601490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.609652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.609981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.610002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.616908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.617292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.617312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.624957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.625306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.625326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.632994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.633371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.633392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.640646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.640937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.640958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.441 [2024-07-24 22:15:10.648026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.441 [2024-07-24 22:15:10.648294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.441 [2024-07-24 22:15:10.648315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.656238] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.656584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.656605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.664076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.664443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.664463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.672293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.672602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.672623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.680004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.680387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.680408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.687902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.688290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.688311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.695538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.695918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.695938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.703885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.704263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.704283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
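The entries above all follow one pattern: tcp.c reports a data digest (CRC-32C) mismatch on the qpair, the affected WRITE command is printed, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal bash sketch for tallying those pairs from a saved copy of this console output is shown below; the log path is an assumption for illustration only, not a file the test creates.

# Tally digest failures and their transient-transport-error completions in a saved log.
# LOG is a hypothetical path to a captured copy of this console output.
LOG=/tmp/nvmf_digest_error_console.log
digest_errors=$(grep -cF 'data_crc32_calc_done: *ERROR*: Data digest error' "$LOG")
transient_errors=$(grep -cF 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG")
echo "digest errors: $digest_errors  transient completions: $transient_errors"

In this excerpt every digest error is followed by exactly one such completion, which is what the iostat-based check further below relies on.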
00:27:31.702 [2024-07-24 22:15:10.716761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.717411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.717431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.731710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.732092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.732113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.739817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.740170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.740191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.745439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.745702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.745727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.751484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.751750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.751770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.756910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.757214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.757235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.762962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.763253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.763277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.768994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.769262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.769282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.774540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.774811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.774831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.780451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.780805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.780825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.786928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.787230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.787252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.795081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.795556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.795577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.802162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.802429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.802451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.809823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.810186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.810207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.817371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.817668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.817688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.825914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.826266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.826286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.834297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.834690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.834711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.844292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.702 [2024-07-24 22:15:10.844660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.702 [2024-07-24 22:15:10.844681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.702 [2024-07-24 22:15:10.853053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc41490) with pdu=0x2000190fef90 00:27:31.703 [2024-07-24 22:15:10.853361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.703 [2024-07-24 22:15:10.853387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.703 00:27:31.703 Latency(us) 00:27:31.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.703 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:31.703 nvme0n1 : 2.00 4368.99 546.12 0.00 0.00 3656.72 2241.33 19503.51 00:27:31.703 =================================================================================================================== 00:27:31.703 Total : 4368.99 546.12 0.00 0.00 3656.72 2241.33 19503.51 00:27:31.703 0 00:27:31.703 22:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:31.703 22:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:31.703 22:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:31.703 22:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:31.703 | .driver_specific 00:27:31.703 | .nvme_error 00:27:31.703 | .status_code 00:27:31.703 | .command_transient_transport_error' 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 282 > 0 )) 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2841632 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2841632 ']' 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2841632 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2841632 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2841632' 00:27:31.962 killing process with pid 2841632 00:27:31.962 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2841632 00:27:31.962 Received shutdown signal, test time was about 2.000000 seconds 00:27:31.962 00:27:31.962 Latency(us) 00:27:31.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.963 =================================================================================================================== 00:27:31.963 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:31.963 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2841632 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2838933 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2838933 ']' 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2838933 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2838933 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2838933' 00:27:32.222 killing process with pid 2838933 
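The check above is how the test counts those failures from the still-running bdevperf app: bperf_rpc issues bdev_get_iostat -b nvme0n1 against the bperf RPC socket, and jq reduces the result to the command_transient_transport_error counter, which must be greater than zero (282 in this run). A standalone sketch of the same query follows; the rpc.py path, socket path, and bdev name are the ones used by this run and would be assumptions anywhere else.

# Read the per-bdev NVMe transient-transport-error counter from a live bdevperf instance.
# Paths and bdev name below are specific to this test run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The digest error test passes only if at least one transient transport error was recorded.
(( count > 0 )) && echo "transient transport errors observed: $count"

The jq filter here is the compact form of the piped filter the test script uses; both select the same field.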
00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2838933 00:27:32.222 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2838933 00:27:32.481 00:27:32.481 real 0m16.718s 00:27:32.481 user 0m31.507s 00:27:32.481 sys 0m4.963s 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:32.481 ************************************ 00:27:32.481 END TEST nvmf_digest_error 00:27:32.481 ************************************ 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.481 rmmod nvme_tcp 00:27:32.481 rmmod nvme_fabrics 00:27:32.481 rmmod nvme_keyring 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2838933 ']' 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2838933 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2838933 ']' 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2838933 00:27:32.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2838933) - No such process 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2838933 is not found' 00:27:32.481 Process with pid 2838933 is not found 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.481 22:15:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:27:35.017 00:27:35.017 real 0m42.371s 00:27:35.017 user 1m5.075s 00:27:35.017 sys 0m15.054s 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:35.017 ************************************ 00:27:35.017 END TEST nvmf_digest 00:27:35.017 ************************************ 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.017 ************************************ 00:27:35.017 START TEST nvmf_bdevperf 00:27:35.017 ************************************ 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:35.017 * Looking for test storage... 00:27:35.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:35.017 22:15:13 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:35.017 22:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@297 -- # x722=() 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:41.588 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:41.589 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:41.589 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.589 22:15:20 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:41.589 Found net devices under 0000:af:00.0: cvl_0_0 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:41.589 Found net devices under 0000:af:00.1: cvl_0_1 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:41.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:27:41.589 00:27:41.589 --- 10.0.0.2 ping statistics --- 00:27:41.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.589 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:27:41.589 00:27:41.589 --- 10.0.0.1 ping statistics --- 00:27:41.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.589 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2845863 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2845863 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2845863 ']' 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.589 22:15:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:41.589 [2024-07-24 22:15:20.415757] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:27:41.589 [2024-07-24 22:15:20.415807] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.589 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.589 [2024-07-24 22:15:20.488457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:41.589 [2024-07-24 22:15:20.560983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.589 [2024-07-24 22:15:20.561020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.589 [2024-07-24 22:15:20.561029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.589 [2024-07-24 22:15:20.561037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.589 [2024-07-24 22:15:20.561044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.589 [2024-07-24 22:15:20.561146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.589 [2024-07-24 22:15:20.561229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:41.589 [2024-07-24 22:15:20.561231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.156 [2024-07-24 22:15:21.275971] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.156 Malloc0 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.156 22:15:21 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.156 [2024-07-24 22:15:21.340234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.156 { 00:27:42.156 "params": { 00:27:42.156 "name": "Nvme$subsystem", 00:27:42.156 "trtype": "$TEST_TRANSPORT", 00:27:42.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.156 "adrfam": "ipv4", 00:27:42.156 "trsvcid": "$NVMF_PORT", 00:27:42.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.156 "hdgst": ${hdgst:-false}, 00:27:42.156 "ddgst": ${ddgst:-false} 00:27:42.156 }, 00:27:42.156 "method": "bdev_nvme_attach_controller" 00:27:42.156 } 00:27:42.156 EOF 00:27:42.156 )") 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:42.156 22:15:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:42.156 "params": { 00:27:42.156 "name": "Nvme1", 00:27:42.156 "trtype": "tcp", 00:27:42.156 "traddr": "10.0.0.2", 00:27:42.156 "adrfam": "ipv4", 00:27:42.156 "trsvcid": "4420", 00:27:42.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:42.156 "hdgst": false, 00:27:42.156 "ddgst": false 00:27:42.156 }, 00:27:42.156 "method": "bdev_nvme_attach_controller" 00:27:42.156 }' 00:27:42.415 [2024-07-24 22:15:21.393475] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:27:42.415 [2024-07-24 22:15:21.393528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846137 ] 00:27:42.415 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.415 [2024-07-24 22:15:21.464021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.415 [2024-07-24 22:15:21.532880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.674 Running I/O for 1 seconds... 00:27:43.609 00:27:43.610 Latency(us) 00:27:43.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.610 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:43.610 Verification LBA range: start 0x0 length 0x4000 00:27:43.610 Nvme1n1 : 1.05 11159.34 43.59 0.00 0.00 11170.52 1966.08 47815.07 00:27:43.610 =================================================================================================================== 00:27:43.610 Total : 11159.34 43.59 0.00 0.00 11170.52 1966.08 47815.07 00:27:43.868 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2846407 00:27:43.868 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:43.868 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:43.868 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:43.868 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:43.868 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:43.868 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.868 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.868 { 00:27:43.868 "params": { 00:27:43.868 "name": "Nvme$subsystem", 00:27:43.868 "trtype": "$TEST_TRANSPORT", 00:27:43.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.868 "adrfam": "ipv4", 00:27:43.868 "trsvcid": "$NVMF_PORT", 00:27:43.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.868 "hdgst": ${hdgst:-false}, 00:27:43.868 "ddgst": ${ddgst:-false} 00:27:43.868 }, 00:27:43.868 "method": "bdev_nvme_attach_controller" 00:27:43.868 } 00:27:43.868 EOF 00:27:43.869 )") 00:27:43.869 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:43.869 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:43.869 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:43.869 22:15:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:43.869 "params": { 00:27:43.869 "name": "Nvme1", 00:27:43.869 "trtype": "tcp", 00:27:43.869 "traddr": "10.0.0.2", 00:27:43.869 "adrfam": "ipv4", 00:27:43.869 "trsvcid": "4420", 00:27:43.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:43.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:43.869 "hdgst": false, 00:27:43.869 "ddgst": false 00:27:43.869 }, 00:27:43.869 "method": "bdev_nvme_attach_controller" 00:27:43.869 }' 00:27:43.869 [2024-07-24 22:15:23.010816] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:27:43.869 [2024-07-24 22:15:23.010871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846407 ] 00:27:43.869 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.869 [2024-07-24 22:15:23.081346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.127 [2024-07-24 22:15:23.145727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.384 Running I/O for 15 seconds... 00:27:46.925 22:15:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2845863 00:27:46.925 22:15:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:46.925 [2024-07-24 22:15:25.980300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.925 [2024-07-24 22:15:25.980343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.980988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.980998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.981007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.981018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.925 [2024-07-24 22:15:25.981027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.925 [2024-07-24 22:15:25.981037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 
22:15:25.981294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.926 [2024-07-24 22:15:25.981505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981693] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.926 [2024-07-24 22:15:25.981785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.926 [2024-07-24 22:15:25.981796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.981804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.981825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.981846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.981866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.981886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.981905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.981924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.981944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.981963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.981983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.981993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115280 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 
[2024-07-24 22:15:25.982303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.982323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.982344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.982366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.982386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.982407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.982426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.927 [2024-07-24 22:15:25.982448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982510] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.927 [2024-07-24 22:15:25.982541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.927 [2024-07-24 22:15:25.982551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.928 [2024-07-24 22:15:25.982571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.928 [2024-07-24 22:15:25.982592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.928 [2024-07-24 22:15:25.982783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982930] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.982983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.982992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.983003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.983013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.983023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.983032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.983044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.983053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.983064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.983074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.983084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.928 [2024-07-24 22:15:25.983094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.983104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac08c0 is same with the state(5) to be set 00:27:46.928 [2024-07-24 22:15:25.983115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:46.928 [2024-07-24 22:15:25.983123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:46.928 [2024-07-24 22:15:25.983131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115096 len:8 PRP1 0x0 PRP2 0x0 00:27:46.928 [2024-07-24 22:15:25.983143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.928 [2024-07-24 22:15:25.983190] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xac08c0 was disconnected and freed. reset controller. 00:27:46.928 [2024-07-24 22:15:25.985900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.928 [2024-07-24 22:15:25.985954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.928 [2024-07-24 22:15:25.986583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-07-24 22:15:25.986634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.928 [2024-07-24 22:15:25.986667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.928 [2024-07-24 22:15:25.987273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.928 [2024-07-24 22:15:25.987445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.928 [2024-07-24 22:15:25.987457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.928 [2024-07-24 22:15:25.987468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.928 [2024-07-24 22:15:25.990138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.928 [2024-07-24 22:15:25.999138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.928 [2024-07-24 22:15:25.999588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-07-24 22:15:25.999609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.928 [2024-07-24 22:15:25.999620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.928 [2024-07-24 22:15:25.999799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.928 [2024-07-24 22:15:25.999980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:25.999991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.000001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.002599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
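Everything from the kill -9 at host/bdevperf.sh@33 onward follows one pattern: the in-flight I/Os on qpair 0xac08c0 are aborted with SQ DELETION status, the qpair is disconnected and freed, and each reconnect attempt to 10.0.0.2:4420 fails in posix_sock_create with connect() errno = 111, after which bdev_nvme reports the reset as failed and another reset attempt follows shortly after; that retry loop is what the remainder of this excerpt shows. Errno 111 is ECONNREFUSED (nothing is accepting on that address/port any more), which can be confirmed with a quick one-liner:

  # decode the errno reported by posix.c above: 111 = ECONNREFUSED, "Connection refused"
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'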
00:27:46.929 [2024-07-24 22:15:26.011868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.929 [2024-07-24 22:15:26.012150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-07-24 22:15:26.012169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.929 [2024-07-24 22:15:26.012179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.929 [2024-07-24 22:15:26.012337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.929 [2024-07-24 22:15:26.012494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:26.012506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.012514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.015070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.929 [2024-07-24 22:15:26.024589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.929 [2024-07-24 22:15:26.025010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-07-24 22:15:26.025029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.929 [2024-07-24 22:15:26.025039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.929 [2024-07-24 22:15:26.025196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.929 [2024-07-24 22:15:26.025353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:26.025364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.025373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.027919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.929 [2024-07-24 22:15:26.037498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.929 [2024-07-24 22:15:26.037976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-07-24 22:15:26.037995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.929 [2024-07-24 22:15:26.038006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.929 [2024-07-24 22:15:26.038175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.929 [2024-07-24 22:15:26.038346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:26.038358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.038367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.040991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.929 [2024-07-24 22:15:26.050342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.929 [2024-07-24 22:15:26.050871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-07-24 22:15:26.050925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.929 [2024-07-24 22:15:26.050957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.929 [2024-07-24 22:15:26.051449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.929 [2024-07-24 22:15:26.051617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:26.051628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.051637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.054228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.929 [2024-07-24 22:15:26.063154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.929 [2024-07-24 22:15:26.063644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-07-24 22:15:26.063697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.929 [2024-07-24 22:15:26.063752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.929 [2024-07-24 22:15:26.064173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.929 [2024-07-24 22:15:26.064413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:26.064429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.064442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.068182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.929 [2024-07-24 22:15:26.076220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.929 [2024-07-24 22:15:26.076725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-07-24 22:15:26.076744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.929 [2024-07-24 22:15:26.076753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.929 [2024-07-24 22:15:26.076910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.929 [2024-07-24 22:15:26.077069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:26.077080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.077088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.079625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.929 [2024-07-24 22:15:26.088945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.929 [2024-07-24 22:15:26.089418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-07-24 22:15:26.089437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.929 [2024-07-24 22:15:26.089446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.929 [2024-07-24 22:15:26.089603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.929 [2024-07-24 22:15:26.089782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:26.089794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.089803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.092326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.929 [2024-07-24 22:15:26.101635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.929 [2024-07-24 22:15:26.102090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-07-24 22:15:26.102109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.929 [2024-07-24 22:15:26.102118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.929 [2024-07-24 22:15:26.102283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.929 [2024-07-24 22:15:26.102449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:26.102463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.102473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.104963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.929 [2024-07-24 22:15:26.114471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.929 [2024-07-24 22:15:26.114958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-07-24 22:15:26.114976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.929 [2024-07-24 22:15:26.114986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.929 [2024-07-24 22:15:26.115142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.929 [2024-07-24 22:15:26.115300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.929 [2024-07-24 22:15:26.115311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.929 [2024-07-24 22:15:26.115319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.929 [2024-07-24 22:15:26.117813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.930 [2024-07-24 22:15:26.127329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.930 [2024-07-24 22:15:26.127764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.930 [2024-07-24 22:15:26.127782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:46.930 [2024-07-24 22:15:26.127792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:46.930 [2024-07-24 22:15:26.127958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:46.930 [2024-07-24 22:15:26.128123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.930 [2024-07-24 22:15:26.128135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.930 [2024-07-24 22:15:26.128144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.930 [2024-07-24 22:15:26.130738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.238 [2024-07-24 22:15:26.140213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.238 [2024-07-24 22:15:26.140738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.238 [2024-07-24 22:15:26.140791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.238 [2024-07-24 22:15:26.140824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.238 [2024-07-24 22:15:26.141413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.238 [2024-07-24 22:15:26.141737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.238 [2024-07-24 22:15:26.141750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.238 [2024-07-24 22:15:26.141760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.238 [2024-07-24 22:15:26.144423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.238 [2024-07-24 22:15:26.153084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.238 [2024-07-24 22:15:26.153579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.238 [2024-07-24 22:15:26.153609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.238 [2024-07-24 22:15:26.153619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.238 [2024-07-24 22:15:26.153805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.238 [2024-07-24 22:15:26.153978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.238 [2024-07-24 22:15:26.153990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.238 [2024-07-24 22:15:26.153999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.238 [2024-07-24 22:15:26.156629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.238 [2024-07-24 22:15:26.165998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.238 [2024-07-24 22:15:26.166505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.238 [2024-07-24 22:15:26.166559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.238 [2024-07-24 22:15:26.166591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.238 [2024-07-24 22:15:26.167199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.238 [2024-07-24 22:15:26.167684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.238 [2024-07-24 22:15:26.167696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.238 [2024-07-24 22:15:26.167705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.238 [2024-07-24 22:15:26.170334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.238 [2024-07-24 22:15:26.178726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.238 [2024-07-24 22:15:26.179182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.238 [2024-07-24 22:15:26.179200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.238 [2024-07-24 22:15:26.179210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.238 [2024-07-24 22:15:26.179375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.238 [2024-07-24 22:15:26.179542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.238 [2024-07-24 22:15:26.179553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.238 [2024-07-24 22:15:26.179562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.238 [2024-07-24 22:15:26.182051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.238 [2024-07-24 22:15:26.191572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.238 [2024-07-24 22:15:26.192075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.238 [2024-07-24 22:15:26.192129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.238 [2024-07-24 22:15:26.192162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.238 [2024-07-24 22:15:26.192608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.238 [2024-07-24 22:15:26.192771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.238 [2024-07-24 22:15:26.192782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.238 [2024-07-24 22:15:26.192792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.238 [2024-07-24 22:15:26.195287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.238 [2024-07-24 22:15:26.204304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.238 [2024-07-24 22:15:26.204801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.238 [2024-07-24 22:15:26.204819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.238 [2024-07-24 22:15:26.204829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.238 [2024-07-24 22:15:26.204995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.239 [2024-07-24 22:15:26.205159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.239 [2024-07-24 22:15:26.205170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.239 [2024-07-24 22:15:26.205179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.239 [2024-07-24 22:15:26.207782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.239 [2024-07-24 22:15:26.217074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.239 [2024-07-24 22:15:26.217504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.239 [2024-07-24 22:15:26.217557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.239 [2024-07-24 22:15:26.217589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.239 [2024-07-24 22:15:26.218073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.239 [2024-07-24 22:15:26.218232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.239 [2024-07-24 22:15:26.218243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.239 [2024-07-24 22:15:26.218253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.239 [2024-07-24 22:15:26.220851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.239 [2024-07-24 22:15:26.229853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.239 [2024-07-24 22:15:26.230367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.239 [2024-07-24 22:15:26.230420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.239 [2024-07-24 22:15:26.230453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.239 [2024-07-24 22:15:26.230847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.239 [2024-07-24 22:15:26.231019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.239 [2024-07-24 22:15:26.231034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.239 [2024-07-24 22:15:26.231048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.239 [2024-07-24 22:15:26.233774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.239 [2024-07-24 22:15:26.242789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.239 [2024-07-24 22:15:26.243295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.239 [2024-07-24 22:15:26.243317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.239 [2024-07-24 22:15:26.243329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.239 [2024-07-24 22:15:26.243504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.239 [2024-07-24 22:15:26.243681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.239 [2024-07-24 22:15:26.243696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.239 [2024-07-24 22:15:26.243708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.239 [2024-07-24 22:15:26.246430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.239 [2024-07-24 22:15:26.255755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.239 [2024-07-24 22:15:26.256253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.239 [2024-07-24 22:15:26.256273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.239 [2024-07-24 22:15:26.256285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.239 [2024-07-24 22:15:26.256445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.239 [2024-07-24 22:15:26.256604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.239 [2024-07-24 22:15:26.256617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.239 [2024-07-24 22:15:26.256627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.239 [2024-07-24 22:15:26.259187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.239 [2024-07-24 22:15:26.268474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.239 [2024-07-24 22:15:26.268952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.239 [2024-07-24 22:15:26.268970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.239 [2024-07-24 22:15:26.268979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.239 [2024-07-24 22:15:26.269135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.239 [2024-07-24 22:15:26.269293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.239 [2024-07-24 22:15:26.269303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.239 [2024-07-24 22:15:26.269312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.239 [2024-07-24 22:15:26.271863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.239 [2024-07-24 22:15:26.281170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.239 [2024-07-24 22:15:26.281668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.239 [2024-07-24 22:15:26.281689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.239 [2024-07-24 22:15:26.281698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.239 [2024-07-24 22:15:26.281885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.239 [2024-07-24 22:15:26.282052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.239 [2024-07-24 22:15:26.282063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.239 [2024-07-24 22:15:26.282072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.239 [2024-07-24 22:15:26.284580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.239 [2024-07-24 22:15:26.293950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.239 [2024-07-24 22:15:26.294463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.239 [2024-07-24 22:15:26.294515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.239 [2024-07-24 22:15:26.294547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.239 [2024-07-24 22:15:26.294991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.239 [2024-07-24 22:15:26.295159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.239 [2024-07-24 22:15:26.295171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.239 [2024-07-24 22:15:26.295180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.239 [2024-07-24 22:15:26.297683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.239 [2024-07-24 22:15:26.306613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.239 [2024-07-24 22:15:26.307045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.239 [2024-07-24 22:15:26.307063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.239 [2024-07-24 22:15:26.307073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.239 [2024-07-24 22:15:26.307229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.239 [2024-07-24 22:15:26.307386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.239 [2024-07-24 22:15:26.307397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.307405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.309948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.240 [2024-07-24 22:15:26.319315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.319823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.319877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.319910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.320327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.320487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.320498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.320507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.322970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.240 [2024-07-24 22:15:26.331964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.332490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.332541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.332573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.333048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.333215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.333227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.333236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.335738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.240 [2024-07-24 22:15:26.344653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.345185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.345238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.345270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.345873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.346314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.346326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.346335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.348832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.240 [2024-07-24 22:15:26.357471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.357934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.357952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.357962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.358118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.358276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.358286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.358295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.360855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.240 [2024-07-24 22:15:26.370167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.370590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.370608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.370617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.370796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.370962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.370974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.370986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.373499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.240 [2024-07-24 22:15:26.383019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.383524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.383577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.383609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.384211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.384477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.384488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.384496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.387037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.240 [2024-07-24 22:15:26.395766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.396211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.396262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.396294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.396898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.397447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.397458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.397467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.400016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.240 [2024-07-24 22:15:26.408513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.409008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.409027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.409039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.409196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.409353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.409364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.409372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.412019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.240 [2024-07-24 22:15:26.421516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.422036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.422055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.422065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.422235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.422406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.422417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.422427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.240 [2024-07-24 22:15:26.425103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.240 [2024-07-24 22:15:26.434229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.240 [2024-07-24 22:15:26.434744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.240 [2024-07-24 22:15:26.434798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.240 [2024-07-24 22:15:26.434830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.240 [2024-07-24 22:15:26.435339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.240 [2024-07-24 22:15:26.435497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.240 [2024-07-24 22:15:26.435508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.240 [2024-07-24 22:15:26.435516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.241 [2024-07-24 22:15:26.438113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.241 [2024-07-24 22:15:26.446954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.241 [2024-07-24 22:15:26.447490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.241 [2024-07-24 22:15:26.447542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.241 [2024-07-24 22:15:26.447575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.241 [2024-07-24 22:15:26.448180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.241 [2024-07-24 22:15:26.448607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.241 [2024-07-24 22:15:26.448626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.241 [2024-07-24 22:15:26.448639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.502 [2024-07-24 22:15:26.452375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.503 [2024-07-24 22:15:26.460373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.460872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.460889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.460898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.461055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.461212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.461222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.461230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.463777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.503 [2024-07-24 22:15:26.473141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.473636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.473654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.473663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.473845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.474011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.474023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.474032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.476540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.503 [2024-07-24 22:15:26.485903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.486433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.486454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.486465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.486634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.486815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.486829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.486838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.489551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.503 [2024-07-24 22:15:26.498676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.499130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.499149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.499159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.499324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.499490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.499502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.499510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.501998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.503 [2024-07-24 22:15:26.511516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.512023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.512041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.512051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.512208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.512365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.512376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.512385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.514935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.503 [2024-07-24 22:15:26.524168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.524688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.524753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.524786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.525282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.525441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.525452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.525462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.528008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.503 [2024-07-24 22:15:26.536855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.537349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.537366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.537375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.537535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.537692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.537703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.537712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.540204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.503 [2024-07-24 22:15:26.549563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.550067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.550085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.550094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.550251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.550409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.550421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.550429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.552975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.503 [2024-07-24 22:15:26.562348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.562858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.562911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.562943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.563425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.563583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.563594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.563602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.566149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.503 [2024-07-24 22:15:26.575082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.503 [2024-07-24 22:15:26.575569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.503 [2024-07-24 22:15:26.575587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.503 [2024-07-24 22:15:26.575597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.503 [2024-07-24 22:15:26.575759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.503 [2024-07-24 22:15:26.575941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.503 [2024-07-24 22:15:26.575952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.503 [2024-07-24 22:15:26.575968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.503 [2024-07-24 22:15:26.578484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.504 [2024-07-24 22:15:26.587851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.588344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.588361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.588370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.588526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.588683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.588694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.588702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.591252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.504 [2024-07-24 22:15:26.600619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.601061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.601079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.601088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.601244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.601402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.601413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.601421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.603958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.504 [2024-07-24 22:15:26.613377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.613889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.613942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.613974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.614495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.614652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.614663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.614672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.617219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.504 [2024-07-24 22:15:26.626144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.626552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.626573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.626582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.626762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.626929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.626941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.626949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.629464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.504 [2024-07-24 22:15:26.638919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.639413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.639430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.639439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.639596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.639760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.639787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.639797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.642321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.504 [2024-07-24 22:15:26.651721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.652212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.652267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.652301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.652676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.652863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.652875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.652884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.655400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.504 [2024-07-24 22:15:26.664489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.664994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.665048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.665080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.665619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.665804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.665816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.665826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.668344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.504 [2024-07-24 22:15:26.677225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.677734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.677788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.677820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.678194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.678352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.678363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.678372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.680916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.504 [2024-07-24 22:15:26.689993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.690485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.690502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.690511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.690667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.690852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.690865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.690874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.693392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.504 [2024-07-24 22:15:26.702754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.504 [2024-07-24 22:15:26.703261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.504 [2024-07-24 22:15:26.703313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.504 [2024-07-24 22:15:26.703345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.504 [2024-07-24 22:15:26.703953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.504 [2024-07-24 22:15:26.704299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.504 [2024-07-24 22:15:26.704311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.504 [2024-07-24 22:15:26.704320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.504 [2024-07-24 22:15:26.706819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.765 [2024-07-24 22:15:26.715671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.765 [2024-07-24 22:15:26.716221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.765 [2024-07-24 22:15:26.716273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.765 [2024-07-24 22:15:26.716306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.765 [2024-07-24 22:15:26.716774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.765 [2024-07-24 22:15:26.716941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.765 [2024-07-24 22:15:26.716953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.765 [2024-07-24 22:15:26.716962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.765 [2024-07-24 22:15:26.719515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.765 [2024-07-24 22:15:26.728460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.766 [2024-07-24 22:15:26.728965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.766 [2024-07-24 22:15:26.729018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.766 [2024-07-24 22:15:26.729049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.766 [2024-07-24 22:15:26.729637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.766 [2024-07-24 22:15:26.730126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.766 [2024-07-24 22:15:26.730138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.766 [2024-07-24 22:15:26.730147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.766 [2024-07-24 22:15:26.732652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.766 [2024-07-24 22:15:26.741200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.766 [2024-07-24 22:15:26.741700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.766 [2024-07-24 22:15:26.741726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.766 [2024-07-24 22:15:26.741739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.766 [2024-07-24 22:15:26.741943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.766 [2024-07-24 22:15:26.742119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.766 [2024-07-24 22:15:26.742134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.766 [2024-07-24 22:15:26.742146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.766 [2024-07-24 22:15:26.744863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.766 [2024-07-24 22:15:26.754071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.766 [2024-07-24 22:15:26.754596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.766 [2024-07-24 22:15:26.754655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.766 [2024-07-24 22:15:26.754703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.766 [2024-07-24 22:15:26.755150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.766 [2024-07-24 22:15:26.755309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.766 [2024-07-24 22:15:26.755320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.766 [2024-07-24 22:15:26.755330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.766 [2024-07-24 22:15:26.757814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.766 [2024-07-24 22:15:26.766921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.766 [2024-07-24 22:15:26.767418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.766 [2024-07-24 22:15:26.767436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.766 [2024-07-24 22:15:26.767445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.766 [2024-07-24 22:15:26.767601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.766 [2024-07-24 22:15:26.767782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.766 [2024-07-24 22:15:26.767793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.766 [2024-07-24 22:15:26.767802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.766 [2024-07-24 22:15:26.770327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.766 [2024-07-24 22:15:26.779685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.766 [2024-07-24 22:15:26.780197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.766 [2024-07-24 22:15:26.780250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.766 [2024-07-24 22:15:26.780282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.766 [2024-07-24 22:15:26.780870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.766 [2024-07-24 22:15:26.781038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.766 [2024-07-24 22:15:26.781050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.766 [2024-07-24 22:15:26.781059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.766 [2024-07-24 22:15:26.783567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.766 [2024-07-24 22:15:26.792339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.766 [2024-07-24 22:15:26.792814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.766 [2024-07-24 22:15:26.792832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.766 [2024-07-24 22:15:26.792842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.766 [2024-07-24 22:15:26.792999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.766 [2024-07-24 22:15:26.793156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.766 [2024-07-24 22:15:26.793170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.766 [2024-07-24 22:15:26.793178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.766 [2024-07-24 22:15:26.795725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.766 [2024-07-24 22:15:26.805088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.766 [2024-07-24 22:15:26.805566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.766 [2024-07-24 22:15:26.805612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.766 [2024-07-24 22:15:26.805644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.766 [2024-07-24 22:15:26.806161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.766 [2024-07-24 22:15:26.806329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.766 [2024-07-24 22:15:26.806340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.766 [2024-07-24 22:15:26.806350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.766 [2024-07-24 22:15:26.808847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.766 [2024-07-24 22:15:26.817845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.766 [2024-07-24 22:15:26.818352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.766 [2024-07-24 22:15:26.818405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.766 [2024-07-24 22:15:26.818437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.766 [2024-07-24 22:15:26.819028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.766 [2024-07-24 22:15:26.819195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.766 [2024-07-24 22:15:26.819206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.766 [2024-07-24 22:15:26.819215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.766 [2024-07-24 22:15:26.821710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.766 [2024-07-24 22:15:26.830485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.766 [2024-07-24 22:15:26.830884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.766 [2024-07-24 22:15:26.830903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.766 [2024-07-24 22:15:26.830912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.766 [2024-07-24 22:15:26.831070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.766 [2024-07-24 22:15:26.831226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.766 [2024-07-24 22:15:26.831237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.766 [2024-07-24 22:15:26.831245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.833789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.767 [2024-07-24 22:15:26.843158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.843657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.843703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.843751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.844276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.844442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.844453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.844461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.846951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.767 [2024-07-24 22:15:26.855799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.856305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.856356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.856388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.856992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.857397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.857409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.857418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.859906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.767 [2024-07-24 22:15:26.868541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.869039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.869092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.869125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.869449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.869607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.869618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.869627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.872204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.767 [2024-07-24 22:15:26.881273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.881750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.881771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.881780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.881942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.882101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.882112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.882122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.884666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.767 [2024-07-24 22:15:26.893950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.894446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.894487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.894520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.895124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.895356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.895367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.895376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.897871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.767 [2024-07-24 22:15:26.906652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.907167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.907219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.907251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.907561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.907725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.907737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.907762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.910286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.767 [2024-07-24 22:15:26.919430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.919935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.919988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.920020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.920442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.920601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.920612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.920624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.923173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.767 [2024-07-24 22:15:26.932190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.932702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.932767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.932800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.933294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.933452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.933463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.933472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.936008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.767 [2024-07-24 22:15:26.944928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.945424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.945441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.945450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.945607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.945788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.945799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.945808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.948327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.767 [2024-07-24 22:15:26.957694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.767 [2024-07-24 22:15:26.958204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.767 [2024-07-24 22:15:26.958257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.767 [2024-07-24 22:15:26.958290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.767 [2024-07-24 22:15:26.958618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.767 [2024-07-24 22:15:26.958780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.767 [2024-07-24 22:15:26.958791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.767 [2024-07-24 22:15:26.958800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.767 [2024-07-24 22:15:26.961259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.768 [2024-07-24 22:15:26.970484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.768 [2024-07-24 22:15:26.970895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.768 [2024-07-24 22:15:26.970942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:47.768 [2024-07-24 22:15:26.970977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:47.768 [2024-07-24 22:15:26.971502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:47.768 [2024-07-24 22:15:26.971661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.768 [2024-07-24 22:15:26.971672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.768 [2024-07-24 22:15:26.971681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.768 [2024-07-24 22:15:26.974297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.029 [2024-07-24 22:15:26.983310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.029 [2024-07-24 22:15:26.983743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.029 [2024-07-24 22:15:26.983762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.029 [2024-07-24 22:15:26.983771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.029 [2024-07-24 22:15:26.983929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.029 [2024-07-24 22:15:26.984085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.029 [2024-07-24 22:15:26.984095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.029 [2024-07-24 22:15:26.984104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.029 [2024-07-24 22:15:26.986646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.029 [2024-07-24 22:15:26.996133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.029 [2024-07-24 22:15:26.996517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.029 [2024-07-24 22:15:26.996538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.029 [2024-07-24 22:15:26.996551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.029 [2024-07-24 22:15:26.996727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.029 [2024-07-24 22:15:26.996894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.029 [2024-07-24 22:15:26.996909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.029 [2024-07-24 22:15:26.996925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.029 [2024-07-24 22:15:26.999630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.029 [2024-07-24 22:15:27.009184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.029 [2024-07-24 22:15:27.009619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.029 [2024-07-24 22:15:27.009639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.029 [2024-07-24 22:15:27.009651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.029 [2024-07-24 22:15:27.009838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.029 [2024-07-24 22:15:27.010007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.029 [2024-07-24 22:15:27.010019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.029 [2024-07-24 22:15:27.010028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.029 [2024-07-24 22:15:27.012543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.029 [2024-07-24 22:15:27.021896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.029 [2024-07-24 22:15:27.022420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.029 [2024-07-24 22:15:27.022473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.029 [2024-07-24 22:15:27.022506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.029 [2024-07-24 22:15:27.023014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.029 [2024-07-24 22:15:27.023172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.029 [2024-07-24 22:15:27.023183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.029 [2024-07-24 22:15:27.023192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.029 [2024-07-24 22:15:27.025651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.029 [2024-07-24 22:15:27.034748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.029 [2024-07-24 22:15:27.035265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.029 [2024-07-24 22:15:27.035282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.029 [2024-07-24 22:15:27.035292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.029 [2024-07-24 22:15:27.035448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.029 [2024-07-24 22:15:27.035605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.029 [2024-07-24 22:15:27.035616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.029 [2024-07-24 22:15:27.035624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.029 [2024-07-24 22:15:27.038174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.029 [2024-07-24 22:15:27.047389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.029 [2024-07-24 22:15:27.047881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.029 [2024-07-24 22:15:27.047936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.029 [2024-07-24 22:15:27.047969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.029 [2024-07-24 22:15:27.048509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.029 [2024-07-24 22:15:27.048667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.029 [2024-07-24 22:15:27.048678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.029 [2024-07-24 22:15:27.048687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.029 [2024-07-24 22:15:27.051239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.029 [2024-07-24 22:15:27.060103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.029 [2024-07-24 22:15:27.060558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.029 [2024-07-24 22:15:27.060611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.029 [2024-07-24 22:15:27.060645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.029 [2024-07-24 22:15:27.061249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.029 [2024-07-24 22:15:27.061856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.029 [2024-07-24 22:15:27.061887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.029 [2024-07-24 22:15:27.061896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.029 [2024-07-24 22:15:27.064423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.029 [2024-07-24 22:15:27.072853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.029 [2024-07-24 22:15:27.073331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.029 [2024-07-24 22:15:27.073349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.029 [2024-07-24 22:15:27.073358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.029 [2024-07-24 22:15:27.073515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.029 [2024-07-24 22:15:27.073673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.029 [2024-07-24 22:15:27.073684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.029 [2024-07-24 22:15:27.073692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.029 [2024-07-24 22:15:27.076239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.029 [2024-07-24 22:15:27.085545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.086050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.086104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.086137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.086743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.087059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.087071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.087079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.089536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.030 [2024-07-24 22:15:27.098321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.098853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.098906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.098945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.099535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.099772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.099783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.099792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.102294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.030 [2024-07-24 22:15:27.111079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.111569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.111622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.111655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.112056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.112215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.112226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.112234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.114771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.030 [2024-07-24 22:15:27.123847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.124360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.124378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.124387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.124543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.124700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.124710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.124725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.127217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.030 [2024-07-24 22:15:27.136595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.137045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.137099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.137131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.137736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.138264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.138279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.138288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.140817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.030 [2024-07-24 22:15:27.149372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.149868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.149887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.149896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.150053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.150211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.150222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.150230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.152771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.030 [2024-07-24 22:15:27.162051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.162569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.162621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.162654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.163220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.163387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.163399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.163408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.165904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.030 [2024-07-24 22:15:27.174739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.175202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.175255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.175287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.175893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.176374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.176385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.176394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.178888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.030 [2024-07-24 22:15:27.187525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.188014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.188068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.188100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.188689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.189232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.189244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.189253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.191757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.030 [2024-07-24 22:15:27.200313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.200785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.030 [2024-07-24 22:15:27.200803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.030 [2024-07-24 22:15:27.200813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.030 [2024-07-24 22:15:27.200970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.030 [2024-07-24 22:15:27.201128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.030 [2024-07-24 22:15:27.201139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.030 [2024-07-24 22:15:27.201147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.030 [2024-07-24 22:15:27.203667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.030 [2024-07-24 22:15:27.213082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.030 [2024-07-24 22:15:27.213590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.031 [2024-07-24 22:15:27.213642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.031 [2024-07-24 22:15:27.213675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.031 [2024-07-24 22:15:27.214173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.031 [2024-07-24 22:15:27.214341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.031 [2024-07-24 22:15:27.214352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.031 [2024-07-24 22:15:27.214361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.031 [2024-07-24 22:15:27.216943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.031 [2024-07-24 22:15:27.225882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.031 [2024-07-24 22:15:27.226338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.031 [2024-07-24 22:15:27.226390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.031 [2024-07-24 22:15:27.226423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.031 [2024-07-24 22:15:27.227043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.031 [2024-07-24 22:15:27.227312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.031 [2024-07-24 22:15:27.227323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.031 [2024-07-24 22:15:27.227332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.031 [2024-07-24 22:15:27.229875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.031 [2024-07-24 22:15:27.238683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.031 [2024-07-24 22:15:27.239185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.031 [2024-07-24 22:15:27.239238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.031 [2024-07-24 22:15:27.239270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.031 [2024-07-24 22:15:27.239875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.031 [2024-07-24 22:15:27.240463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.031 [2024-07-24 22:15:27.240477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.031 [2024-07-24 22:15:27.240487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.292 [2024-07-24 22:15:27.243085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.292 [2024-07-24 22:15:27.251410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.292 [2024-07-24 22:15:27.251851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-24 22:15:27.251873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.292 [2024-07-24 22:15:27.251887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.292 [2024-07-24 22:15:27.252058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.292 [2024-07-24 22:15:27.252227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.292 [2024-07-24 22:15:27.252244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.292 [2024-07-24 22:15:27.252257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.292 [2024-07-24 22:15:27.254978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.292 [2024-07-24 22:15:27.264322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.292 [2024-07-24 22:15:27.264839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-24 22:15:27.264895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.292 [2024-07-24 22:15:27.264928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.292 [2024-07-24 22:15:27.265207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.292 [2024-07-24 22:15:27.265366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.292 [2024-07-24 22:15:27.265377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.292 [2024-07-24 22:15:27.265389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.292 [2024-07-24 22:15:27.267968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.292 [2024-07-24 22:15:27.277319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.292 [2024-07-24 22:15:27.277768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-24 22:15:27.277787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.292 [2024-07-24 22:15:27.277798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.292 [2024-07-24 22:15:27.278370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.292 [2024-07-24 22:15:27.278608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.292 [2024-07-24 22:15:27.278624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.292 [2024-07-24 22:15:27.278636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.292 [2024-07-24 22:15:27.282386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.292 [2024-07-24 22:15:27.290570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.292 [2024-07-24 22:15:27.291035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-24 22:15:27.291086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.292 [2024-07-24 22:15:27.291120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.292 [2024-07-24 22:15:27.291650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.292 [2024-07-24 22:15:27.291821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.292 [2024-07-24 22:15:27.291834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.292 [2024-07-24 22:15:27.291843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.292 [2024-07-24 22:15:27.294363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.292 [2024-07-24 22:15:27.303293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.292 [2024-07-24 22:15:27.303648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-24 22:15:27.303666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.292 [2024-07-24 22:15:27.303675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.292 [2024-07-24 22:15:27.303839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.292 [2024-07-24 22:15:27.303996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.292 [2024-07-24 22:15:27.304007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.292 [2024-07-24 22:15:27.304015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.292 [2024-07-24 22:15:27.306554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.292 [2024-07-24 22:15:27.316055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.292 [2024-07-24 22:15:27.316486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-24 22:15:27.316507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.292 [2024-07-24 22:15:27.316516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.292 [2024-07-24 22:15:27.316672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.292 [2024-07-24 22:15:27.316835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.292 [2024-07-24 22:15:27.316846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.292 [2024-07-24 22:15:27.316855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.292 [2024-07-24 22:15:27.319458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.292 [2024-07-24 22:15:27.328792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.292 [2024-07-24 22:15:27.329302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-24 22:15:27.329354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.292 [2024-07-24 22:15:27.329387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.292 [2024-07-24 22:15:27.329814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.292 [2024-07-24 22:15:27.329980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.292 [2024-07-24 22:15:27.329991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.292 [2024-07-24 22:15:27.330001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.292 [2024-07-24 22:15:27.332520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.292 [2024-07-24 22:15:27.341546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.292 [2024-07-24 22:15:27.341978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-24 22:15:27.341997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.292 [2024-07-24 22:15:27.342006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.292 [2024-07-24 22:15:27.342162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.292 [2024-07-24 22:15:27.342318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.292 [2024-07-24 22:15:27.342329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.292 [2024-07-24 22:15:27.342337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.292 [2024-07-24 22:15:27.344889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.292 [2024-07-24 22:15:27.354349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.292 [2024-07-24 22:15:27.354775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-24 22:15:27.354794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.292 [2024-07-24 22:15:27.354803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.354959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.355119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.355130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.355138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.357723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.293 [2024-07-24 22:15:27.367105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.367553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.367571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.367581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.367744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.367903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.367914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.367922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.370433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.293 [2024-07-24 22:15:27.380124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.380596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.380614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.380624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.380801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.380973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.380985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.380994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.383660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.293 [2024-07-24 22:15:27.393129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.393625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.393678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.393710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.394313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.394742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.394754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.394763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.397438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.293 [2024-07-24 22:15:27.406106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.406554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.406573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.406583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.406759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.406931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.406943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.406952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.409621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.293 [2024-07-24 22:15:27.419062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.419549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.419568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.419578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.419750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.419923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.419934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.419942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.422448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.293 [2024-07-24 22:15:27.431845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.432320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.432338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.432347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.432503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.432660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.432672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.432680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.435229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.293 [2024-07-24 22:15:27.444593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.445050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.445067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.445080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.445237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.445395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.445405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.445414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.447964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.293 [2024-07-24 22:15:27.457281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.457638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.457656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.457665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.457848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.458014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.458025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.458034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.460546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.293 [2024-07-24 22:15:27.470096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.470497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.470515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.470524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.470681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.470845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.293 [2024-07-24 22:15:27.470857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.293 [2024-07-24 22:15:27.470865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.293 [2024-07-24 22:15:27.473410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.293 [2024-07-24 22:15:27.482876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.293 [2024-07-24 22:15:27.483303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-24 22:15:27.483321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.293 [2024-07-24 22:15:27.483330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.293 [2024-07-24 22:15:27.483488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.293 [2024-07-24 22:15:27.483646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.294 [2024-07-24 22:15:27.483659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.294 [2024-07-24 22:15:27.483668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.294 [2024-07-24 22:15:27.486211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.294 [2024-07-24 22:15:27.495593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.294 [2024-07-24 22:15:27.496039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.294 [2024-07-24 22:15:27.496093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.294 [2024-07-24 22:15:27.496126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.294 [2024-07-24 22:15:27.496698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.294 [2024-07-24 22:15:27.496861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.294 [2024-07-24 22:15:27.496872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.294 [2024-07-24 22:15:27.496881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.294 [2024-07-24 22:15:27.499478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.554 [2024-07-24 22:15:27.508529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.554 [2024-07-24 22:15:27.508972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.554 [2024-07-24 22:15:27.508993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.554 [2024-07-24 22:15:27.509006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.554 [2024-07-24 22:15:27.509180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.554 [2024-07-24 22:15:27.509357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.554 [2024-07-24 22:15:27.509371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.554 [2024-07-24 22:15:27.509383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.554 [2024-07-24 22:15:27.512118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.554 [2024-07-24 22:15:27.521525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.554 [2024-07-24 22:15:27.522041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.554 [2024-07-24 22:15:27.522095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.554 [2024-07-24 22:15:27.522128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.554 [2024-07-24 22:15:27.522742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.554 [2024-07-24 22:15:27.522914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.554 [2024-07-24 22:15:27.522926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.554 [2024-07-24 22:15:27.522935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.554 [2024-07-24 22:15:27.525608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.554 [2024-07-24 22:15:27.534446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.554 [2024-07-24 22:15:27.534905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.554 [2024-07-24 22:15:27.534925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.554 [2024-07-24 22:15:27.534935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.554 [2024-07-24 22:15:27.535106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.554 [2024-07-24 22:15:27.535277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.554 [2024-07-24 22:15:27.535288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.554 [2024-07-24 22:15:27.535298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.554 [2024-07-24 22:15:27.537971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.554 [2024-07-24 22:15:27.547214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.554 [2024-07-24 22:15:27.547540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.554 [2024-07-24 22:15:27.547558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.554 [2024-07-24 22:15:27.547567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.554 [2024-07-24 22:15:27.547729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.554 [2024-07-24 22:15:27.547912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.554 [2024-07-24 22:15:27.547923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.554 [2024-07-24 22:15:27.547932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.554 [2024-07-24 22:15:27.550505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.554 [2024-07-24 22:15:27.560026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.554 [2024-07-24 22:15:27.560510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.554 [2024-07-24 22:15:27.560561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.554 [2024-07-24 22:15:27.560594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.554 [2024-07-24 22:15:27.561102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.554 [2024-07-24 22:15:27.561261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.554 [2024-07-24 22:15:27.561272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.554 [2024-07-24 22:15:27.561282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.554 [2024-07-24 22:15:27.563820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.554 [2024-07-24 22:15:27.572903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.554 [2024-07-24 22:15:27.573413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.554 [2024-07-24 22:15:27.573466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.554 [2024-07-24 22:15:27.573500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.554 [2024-07-24 22:15:27.573934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.554 [2024-07-24 22:15:27.574093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.554 [2024-07-24 22:15:27.574104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.554 [2024-07-24 22:15:27.574113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.554 [2024-07-24 22:15:27.576631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.554 [2024-07-24 22:15:27.585677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.554 [2024-07-24 22:15:27.586047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.554 [2024-07-24 22:15:27.586065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.554 [2024-07-24 22:15:27.586075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.554 [2024-07-24 22:15:27.586231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.554 [2024-07-24 22:15:27.586389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.586399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.586409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.588957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.555 [2024-07-24 22:15:27.598468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.598838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.598857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.598867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.599035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.599192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.599203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.599212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.601758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.555 [2024-07-24 22:15:27.611209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.611640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.611658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.611667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.611851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.612016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.612028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.612040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.614550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.555 [2024-07-24 22:15:27.623949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.624423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.624441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.624450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.624606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.624768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.624780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.624789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.627333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.555 [2024-07-24 22:15:27.636733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.637196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.637248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.637281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.637886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.638485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.638496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.638505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.640993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.555 [2024-07-24 22:15:27.649656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.650115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.650134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.650144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.650301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.650460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.650471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.650479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.652972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.555 [2024-07-24 22:15:27.662489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.662892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.662914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.662923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.663080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.663236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.663247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.663255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.665797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.555 [2024-07-24 22:15:27.675174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.675548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.675567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.675577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.675749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.675921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.675932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.675941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.678477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.555 [2024-07-24 22:15:27.687925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.688365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.688383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.688392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.688549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.688706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.688722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.688731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.691272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.555 [2024-07-24 22:15:27.700636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.701114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.701133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.701142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.701299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.701459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.555 [2024-07-24 22:15:27.701470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.555 [2024-07-24 22:15:27.701478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.555 [2024-07-24 22:15:27.704075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.555 [2024-07-24 22:15:27.713420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.555 [2024-07-24 22:15:27.713831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.555 [2024-07-24 22:15:27.713885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.555 [2024-07-24 22:15:27.713918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.555 [2024-07-24 22:15:27.714463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.555 [2024-07-24 22:15:27.714621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.556 [2024-07-24 22:15:27.714631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.556 [2024-07-24 22:15:27.714641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.556 [2024-07-24 22:15:27.717189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.556 [2024-07-24 22:15:27.726273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.556 [2024-07-24 22:15:27.726785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.556 [2024-07-24 22:15:27.726839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.556 [2024-07-24 22:15:27.726871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.556 [2024-07-24 22:15:27.727247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.556 [2024-07-24 22:15:27.727405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.556 [2024-07-24 22:15:27.727416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.556 [2024-07-24 22:15:27.727424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.556 [2024-07-24 22:15:27.729965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.556 [2024-07-24 22:15:27.739030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.556 [2024-07-24 22:15:27.739473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.556 [2024-07-24 22:15:27.739524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.556 [2024-07-24 22:15:27.739557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.556 [2024-07-24 22:15:27.740165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.556 [2024-07-24 22:15:27.740400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.556 [2024-07-24 22:15:27.740411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.556 [2024-07-24 22:15:27.740420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.556 [2024-07-24 22:15:27.742902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.556 [2024-07-24 22:15:27.751821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.556 [2024-07-24 22:15:27.752284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.556 [2024-07-24 22:15:27.752302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.556 [2024-07-24 22:15:27.752312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.556 [2024-07-24 22:15:27.752477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.556 [2024-07-24 22:15:27.752643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.556 [2024-07-24 22:15:27.752654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.556 [2024-07-24 22:15:27.752663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.556 [2024-07-24 22:15:27.755141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.556 [2024-07-24 22:15:27.764649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.556 [2024-07-24 22:15:27.765043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.556 [2024-07-24 22:15:27.765063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.556 [2024-07-24 22:15:27.765073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.556 [2024-07-24 22:15:27.765243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.556 [2024-07-24 22:15:27.765413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.556 [2024-07-24 22:15:27.765424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.556 [2024-07-24 22:15:27.765434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.816 [2024-07-24 22:15:27.768164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.816 [2024-07-24 22:15:27.777441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.816 [2024-07-24 22:15:27.777856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.816 [2024-07-24 22:15:27.777876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.816 [2024-07-24 22:15:27.777887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.816 [2024-07-24 22:15:27.778055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.816 [2024-07-24 22:15:27.778221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.816 [2024-07-24 22:15:27.778233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.816 [2024-07-24 22:15:27.778242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.816 [2024-07-24 22:15:27.780764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.816 [2024-07-24 22:15:27.790307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.816 [2024-07-24 22:15:27.790773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.816 [2024-07-24 22:15:27.790803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.816 [2024-07-24 22:15:27.790817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.816 [2024-07-24 22:15:27.790976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.816 [2024-07-24 22:15:27.791134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.816 [2024-07-24 22:15:27.791145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.816 [2024-07-24 22:15:27.791153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.816 [2024-07-24 22:15:27.793616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.816 [2024-07-24 22:15:27.803165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.816 [2024-07-24 22:15:27.803616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.816 [2024-07-24 22:15:27.803669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.816 [2024-07-24 22:15:27.803701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.816 [2024-07-24 22:15:27.804307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.816 [2024-07-24 22:15:27.804867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.804878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.804886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.808503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.817 [2024-07-24 22:15:27.816612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.817127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.817181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.817213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.817708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.817 [2024-07-24 22:15:27.817906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.817917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.817926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.820441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.817 [2024-07-24 22:15:27.829359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.829852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.829906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.829939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.830529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.817 [2024-07-24 22:15:27.830907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.830922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.830931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.833448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.817 [2024-07-24 22:15:27.842134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.842634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.842652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.842661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.842844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.817 [2024-07-24 22:15:27.843009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.843021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.843029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.845541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.817 [2024-07-24 22:15:27.854911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.855416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.855469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.855502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.855819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.817 [2024-07-24 22:15:27.855986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.855997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.856006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.858513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.817 [2024-07-24 22:15:27.867596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.868109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.868162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.868194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.868757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.817 [2024-07-24 22:15:27.868940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.868951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.868962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.871477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.817 [2024-07-24 22:15:27.880332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.880784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.880838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.880871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.881463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.817 [2024-07-24 22:15:27.882056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.882068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.882077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.884585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.817 [2024-07-24 22:15:27.893085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.893591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.893641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.893673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.894111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.817 [2024-07-24 22:15:27.894279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.894290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.894300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.896964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.817 [2024-07-24 22:15:27.905847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.906375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.906430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.906462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.906960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.817 [2024-07-24 22:15:27.907128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.907139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.907148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.909653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.817 [2024-07-24 22:15:27.918593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.919030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.919049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.919058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.919218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.817 [2024-07-24 22:15:27.919376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.817 [2024-07-24 22:15:27.919386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.817 [2024-07-24 22:15:27.919395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.817 [2024-07-24 22:15:27.921943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.817 [2024-07-24 22:15:27.931336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.817 [2024-07-24 22:15:27.931762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.817 [2024-07-24 22:15:27.931781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.817 [2024-07-24 22:15:27.931790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.817 [2024-07-24 22:15:27.931947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.818 [2024-07-24 22:15:27.932104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.818 [2024-07-24 22:15:27.932115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.818 [2024-07-24 22:15:27.932123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.818 [2024-07-24 22:15:27.934667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.818 [2024-07-24 22:15:27.944125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.818 [2024-07-24 22:15:27.944615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.818 [2024-07-24 22:15:27.944666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.818 [2024-07-24 22:15:27.944698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.818 [2024-07-24 22:15:27.945095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.818 [2024-07-24 22:15:27.945262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.818 [2024-07-24 22:15:27.945273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.818 [2024-07-24 22:15:27.945283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.818 [2024-07-24 22:15:27.947836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.818 [2024-07-24 22:15:27.956830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.818 [2024-07-24 22:15:27.957232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.818 [2024-07-24 22:15:27.957250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.818 [2024-07-24 22:15:27.957259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.818 [2024-07-24 22:15:27.957416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.818 [2024-07-24 22:15:27.957574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.818 [2024-07-24 22:15:27.957585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.818 [2024-07-24 22:15:27.957597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.818 [2024-07-24 22:15:27.960138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.818 [2024-07-24 22:15:27.969554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.818 [2024-07-24 22:15:27.969955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.818 [2024-07-24 22:15:27.970007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.818 [2024-07-24 22:15:27.970039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.818 [2024-07-24 22:15:27.970632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.818 [2024-07-24 22:15:27.971166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.818 [2024-07-24 22:15:27.971178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.818 [2024-07-24 22:15:27.971186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.818 [2024-07-24 22:15:27.973686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.818 [2024-07-24 22:15:27.982332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.818 [2024-07-24 22:15:27.982824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.818 [2024-07-24 22:15:27.982877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.818 [2024-07-24 22:15:27.982909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.818 [2024-07-24 22:15:27.983320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.818 [2024-07-24 22:15:27.983478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.818 [2024-07-24 22:15:27.983489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.818 [2024-07-24 22:15:27.983497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.818 [2024-07-24 22:15:27.986045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.818 [2024-07-24 22:15:27.995128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.818 [2024-07-24 22:15:27.995640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.818 [2024-07-24 22:15:27.995691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.818 [2024-07-24 22:15:27.995740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.818 [2024-07-24 22:15:27.996199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.818 [2024-07-24 22:15:27.996366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.818 [2024-07-24 22:15:27.996377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.818 [2024-07-24 22:15:27.996386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.818 [2024-07-24 22:15:27.998880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.818 [2024-07-24 22:15:28.007878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.818 [2024-07-24 22:15:28.008356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.818 [2024-07-24 22:15:28.008411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.818 [2024-07-24 22:15:28.008444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.818 [2024-07-24 22:15:28.009050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.818 [2024-07-24 22:15:28.009444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.818 [2024-07-24 22:15:28.009456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.818 [2024-07-24 22:15:28.009465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.818 [2024-07-24 22:15:28.011959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.818 [2024-07-24 22:15:28.020666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.818 [2024-07-24 22:15:28.021173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.818 [2024-07-24 22:15:28.021193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:48.818 [2024-07-24 22:15:28.021205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:48.818 [2024-07-24 22:15:28.021366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:48.818 [2024-07-24 22:15:28.021530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.818 [2024-07-24 22:15:28.021544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.818 [2024-07-24 22:15:28.021554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.818 [2024-07-24 22:15:28.024302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.079 [2024-07-24 22:15:28.033435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.079 [2024-07-24 22:15:28.033917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.079 [2024-07-24 22:15:28.033938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.079 [2024-07-24 22:15:28.033948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.079 [2024-07-24 22:15:28.034114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.079 [2024-07-24 22:15:28.034280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.034292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.034300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.036964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.080 [2024-07-24 22:15:28.046243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.046767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.046822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.046854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.047269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.080 [2024-07-24 22:15:28.047430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.047441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.047449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.051016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.080 [2024-07-24 22:15:28.059546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.060040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.060094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.060127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.060581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.080 [2024-07-24 22:15:28.060761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.060773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.060782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.063299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.080 [2024-07-24 22:15:28.072235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.072621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.072673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.072706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.073256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.080 [2024-07-24 22:15:28.073422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.073434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.073442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.075984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.080 [2024-07-24 22:15:28.084911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.085409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.085427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.085437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.085594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.080 [2024-07-24 22:15:28.085757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.085785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.085794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.088327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.080 [2024-07-24 22:15:28.097553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.098070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.098124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.098156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.098644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.080 [2024-07-24 22:15:28.098827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.098839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.098848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.101368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.080 [2024-07-24 22:15:28.110335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.110782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.110799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.110809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.110967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.080 [2024-07-24 22:15:28.111124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.111134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.111143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.113689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.080 [2024-07-24 22:15:28.123152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.123563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.123581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.123590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.123752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.080 [2024-07-24 22:15:28.123910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.123921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.123930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.126440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.080 [2024-07-24 22:15:28.135933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.136428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.136447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.136459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.136616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.080 [2024-07-24 22:15:28.136779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.136790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.136798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.139335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.080 [2024-07-24 22:15:28.148646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.149125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.149143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.149153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.149310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.080 [2024-07-24 22:15:28.149467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.080 [2024-07-24 22:15:28.149478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.080 [2024-07-24 22:15:28.149487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.080 [2024-07-24 22:15:28.152034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.080 [2024-07-24 22:15:28.161340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.080 [2024-07-24 22:15:28.161754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.080 [2024-07-24 22:15:28.161773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.080 [2024-07-24 22:15:28.161783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.080 [2024-07-24 22:15:28.161950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.162116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.162127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.162138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.164639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.081 [2024-07-24 22:15:28.174139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.174685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.174753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.174787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.175376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.175659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.175673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.175682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.178241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.081 [2024-07-24 22:15:28.186861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.187365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.187419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.187451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.187883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.188050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.188061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.188070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.190582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.081 [2024-07-24 22:15:28.199761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.200176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.200194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.200204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.200370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.200536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.200547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.200556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.203090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.081 [2024-07-24 22:15:28.212541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.212937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.212955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.212965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.213121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.213278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.213289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.213297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.215848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.081 [2024-07-24 22:15:28.225296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.225684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.225702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.225711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.225893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.226059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.226070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.226079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.228591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.081 [2024-07-24 22:15:28.238092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.238546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.238599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.238632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.239237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.239753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.239769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.239782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.243511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.081 [2024-07-24 22:15:28.251353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.251800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.251853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.251885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.252476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.252665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.252676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.252684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.255231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.081 [2024-07-24 22:15:28.264017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.264528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.264580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.264612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.265226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.265813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.265825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.265834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.268307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.081 [2024-07-24 22:15:28.276745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.277255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.277276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.277288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.277468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.277643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.277655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.081 [2024-07-24 22:15:28.277667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.081 [2024-07-24 22:15:28.280419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.081 [2024-07-24 22:15:28.289727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.081 [2024-07-24 22:15:28.290251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.081 [2024-07-24 22:15:28.290306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.081 [2024-07-24 22:15:28.290339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.081 [2024-07-24 22:15:28.290687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.081 [2024-07-24 22:15:28.290862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.081 [2024-07-24 22:15:28.290875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.082 [2024-07-24 22:15:28.290884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.343 [2024-07-24 22:15:28.293572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.343 [2024-07-24 22:15:28.302473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.343 [2024-07-24 22:15:28.302967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.343 [2024-07-24 22:15:28.303021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.343 [2024-07-24 22:15:28.303055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.343 [2024-07-24 22:15:28.303625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.343 [2024-07-24 22:15:28.303809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.343 [2024-07-24 22:15:28.303821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.343 [2024-07-24 22:15:28.303834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.343 [2024-07-24 22:15:28.306352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.343 [2024-07-24 22:15:28.315142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.343 [2024-07-24 22:15:28.315652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.343 [2024-07-24 22:15:28.315705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.343 [2024-07-24 22:15:28.315754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.343 [2024-07-24 22:15:28.316173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.343 [2024-07-24 22:15:28.316339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.343 [2024-07-24 22:15:28.316351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.343 [2024-07-24 22:15:28.316360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.343 [2024-07-24 22:15:28.318856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.343 [2024-07-24 22:15:28.327803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.343 [2024-07-24 22:15:28.328266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.343 [2024-07-24 22:15:28.328319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.343 [2024-07-24 22:15:28.328352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.343 [2024-07-24 22:15:28.328874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.343 [2024-07-24 22:15:28.329041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.343 [2024-07-24 22:15:28.329053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.343 [2024-07-24 22:15:28.329062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.343 [2024-07-24 22:15:28.331569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.343 [2024-07-24 22:15:28.340501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.343 [2024-07-24 22:15:28.341025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.343 [2024-07-24 22:15:28.341078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.343 [2024-07-24 22:15:28.341109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.343 [2024-07-24 22:15:28.341660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.343 [2024-07-24 22:15:28.341845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.343 [2024-07-24 22:15:28.341857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.343 [2024-07-24 22:15:28.341866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.343 [2024-07-24 22:15:28.344385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.343 [2024-07-24 22:15:28.353165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.343 [2024-07-24 22:15:28.353640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.343 [2024-07-24 22:15:28.353657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.343 [2024-07-24 22:15:28.353666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.343 [2024-07-24 22:15:28.353851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.343 [2024-07-24 22:15:28.354018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.343 [2024-07-24 22:15:28.354029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.343 [2024-07-24 22:15:28.354038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.343 [2024-07-24 22:15:28.356542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.343 [2024-07-24 22:15:28.365895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.343 [2024-07-24 22:15:28.366407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.343 [2024-07-24 22:15:28.366461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.343 [2024-07-24 22:15:28.366494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.343 [2024-07-24 22:15:28.366986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.343 [2024-07-24 22:15:28.367154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.343 [2024-07-24 22:15:28.367165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.343 [2024-07-24 22:15:28.367174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.343 [2024-07-24 22:15:28.369678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.343 [2024-07-24 22:15:28.378605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.343 [2024-07-24 22:15:28.379116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.343 [2024-07-24 22:15:28.379170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.343 [2024-07-24 22:15:28.379203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.343 [2024-07-24 22:15:28.379812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.343 [2024-07-24 22:15:28.379980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.343 [2024-07-24 22:15:28.379991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.343 [2024-07-24 22:15:28.380001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.343 [2024-07-24 22:15:28.382513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.343 [2024-07-24 22:15:28.391601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.343 [2024-07-24 22:15:28.392117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.343 [2024-07-24 22:15:28.392136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.343 [2024-07-24 22:15:28.392146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.343 [2024-07-24 22:15:28.392316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.343 [2024-07-24 22:15:28.392489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.343 [2024-07-24 22:15:28.392499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.343 [2024-07-24 22:15:28.392509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.343 [2024-07-24 22:15:28.395184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.343 [2024-07-24 22:15:28.404521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.343 [2024-07-24 22:15:28.404992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.343 [2024-07-24 22:15:28.405011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.343 [2024-07-24 22:15:28.405021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.343 [2024-07-24 22:15:28.405187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.405353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.405365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.405377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.408059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.344 [2024-07-24 22:15:28.417401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.417856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.417909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.417942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.418405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.418563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.418574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.418583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.421189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.344 [2024-07-24 22:15:28.430188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.430730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.430783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.430815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.431225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.431392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.431403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.431413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.433992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.344 [2024-07-24 22:15:28.442859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.443373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.443426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.443459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.444063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.444347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.444359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.444368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.446861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.344 [2024-07-24 22:15:28.455585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.456080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.456133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.456165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.456544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.456702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.456713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.456728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.459268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.344 [2024-07-24 22:15:28.468349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.468859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.468912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.468945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.469303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.469461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.469472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.469481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.472029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.344 [2024-07-24 22:15:28.481097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.481585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.481637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.481677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.482115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.482281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.482293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.482302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.484797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.344 [2024-07-24 22:15:28.493869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.494374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.494426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.494458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.495063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.495250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.495262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.495270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.497769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.344 [2024-07-24 22:15:28.506604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.507079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.507142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.507175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.507782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.508360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.508372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.508381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.510877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.344 [2024-07-24 22:15:28.519369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.519851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.519869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.519879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.520036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.520194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.344 [2024-07-24 22:15:28.520210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.344 [2024-07-24 22:15:28.520219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.344 [2024-07-24 22:15:28.522765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.344 [2024-07-24 22:15:28.532037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.344 [2024-07-24 22:15:28.532552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.344 [2024-07-24 22:15:28.532572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.344 [2024-07-24 22:15:28.532583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.344 [2024-07-24 22:15:28.532769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.344 [2024-07-24 22:15:28.532941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.345 [2024-07-24 22:15:28.532954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.345 [2024-07-24 22:15:28.532962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.345 [2024-07-24 22:15:28.535684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.345 [2024-07-24 22:15:28.544922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.345 [2024-07-24 22:15:28.545374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.345 [2024-07-24 22:15:28.545394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.345 [2024-07-24 22:15:28.545404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.345 [2024-07-24 22:15:28.545569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.345 [2024-07-24 22:15:28.545744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.345 [2024-07-24 22:15:28.545756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.345 [2024-07-24 22:15:28.545765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.345 [2024-07-24 22:15:28.548280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.606 [2024-07-24 22:15:28.557845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.606 [2024-07-24 22:15:28.558346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-24 22:15:28.558398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.606 [2024-07-24 22:15:28.558431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.606 [2024-07-24 22:15:28.559037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.559472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.559483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.559492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.562090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.607 [2024-07-24 22:15:28.570628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.571105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.571124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.571133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.607 [2024-07-24 22:15:28.571290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.571448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.571459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.571467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.574007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.607 [2024-07-24 22:15:28.583285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.583802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.583855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.583887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.607 [2024-07-24 22:15:28.584382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.584540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.584551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.584560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.587105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.607 [2024-07-24 22:15:28.596022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.596523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.596576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.596608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.607 [2024-07-24 22:15:28.597215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.597728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.597740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.597749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.600220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.607 [2024-07-24 22:15:28.608770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.609261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.609313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.609345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.607 [2024-07-24 22:15:28.609667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.609852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.609865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.609873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.612441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.607 [2024-07-24 22:15:28.621503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.621929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.621981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.622014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.607 [2024-07-24 22:15:28.622602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.622839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.622850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.622860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.625377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.607 [2024-07-24 22:15:28.634225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.634741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.634798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.634830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.607 [2024-07-24 22:15:28.635421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.635836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.635848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.635857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.638377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.607 [2024-07-24 22:15:28.647018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.647526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.647578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.647610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.607 [2024-07-24 22:15:28.648212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.648819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.648832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.648845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.651368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.607 [2024-07-24 22:15:28.659780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.660273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.660291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.660300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.607 [2024-07-24 22:15:28.660456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.660614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.660624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.660633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.663180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.607 [2024-07-24 22:15:28.672554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.673049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.673103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.673135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.607 [2024-07-24 22:15:28.673554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.607 [2024-07-24 22:15:28.673712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.607 [2024-07-24 22:15:28.673729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.607 [2024-07-24 22:15:28.673737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.607 [2024-07-24 22:15:28.676277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.607 [2024-07-24 22:15:28.685195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.607 [2024-07-24 22:15:28.685689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-24 22:15:28.685707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.607 [2024-07-24 22:15:28.685721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.685901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.686067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.686077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.686086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.688594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.608 [2024-07-24 22:15:28.697959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.698454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.698472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.698481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.698637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.698820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.698832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.698841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.701355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.608 [2024-07-24 22:15:28.710617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.711118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.711136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.711145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.711302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.711460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.711471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.711479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.714026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.608 [2024-07-24 22:15:28.723399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.723902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.723956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.723988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.724559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.724721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.724733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.724758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.727275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.608 [2024-07-24 22:15:28.736113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.736626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.736678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.736709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.737144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.737305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.737316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.737325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.739777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.608 [2024-07-24 22:15:28.748764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.749261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.749279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.749288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.749444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.749600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.749609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.749618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.752164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.608 [2024-07-24 22:15:28.761527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.762036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.762089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.762121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.762671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.762855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.762867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.762876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.765389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.608 [2024-07-24 22:15:28.774179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.774672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.774691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.774700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.774885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.775052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.775063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.775072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.777585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.608 [2024-07-24 22:15:28.786959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.787472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.787493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.787504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.787674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.787854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.787869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.787880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.790601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.608 [2024-07-24 22:15:28.799834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.800340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.800359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.800369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.608 [2024-07-24 22:15:28.800526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.608 [2024-07-24 22:15:28.800683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.608 [2024-07-24 22:15:28.800694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.608 [2024-07-24 22:15:28.800704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.608 [2024-07-24 22:15:28.803257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.608 [2024-07-24 22:15:28.812568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.608 [2024-07-24 22:15:28.813031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-24 22:15:28.813049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.608 [2024-07-24 22:15:28.813059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.609 [2024-07-24 22:15:28.813225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.609 [2024-07-24 22:15:28.813390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.609 [2024-07-24 22:15:28.813402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.609 [2024-07-24 22:15:28.813411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.609 [2024-07-24 22:15:28.816038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.870 [2024-07-24 22:15:28.825352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.870 [2024-07-24 22:15:28.825870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.870 [2024-07-24 22:15:28.825924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.870 [2024-07-24 22:15:28.825965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.870 [2024-07-24 22:15:28.826558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.870 [2024-07-24 22:15:28.827092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.870 [2024-07-24 22:15:28.827108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.827121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.830872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.871 [2024-07-24 22:15:28.838630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.839064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.839082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.839091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.839248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.839406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.871 [2024-07-24 22:15:28.839419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.839428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.841980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.871 [2024-07-24 22:15:28.851438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.851927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.851947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.851958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.852125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.852292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.871 [2024-07-24 22:15:28.852304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.852313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.854851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.871 [2024-07-24 22:15:28.864205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.864701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.864725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.864735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.864917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.865083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.871 [2024-07-24 22:15:28.865097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.865106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.867652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.871 [2024-07-24 22:15:28.877166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.877654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.877674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.877684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.877862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.878032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.871 [2024-07-24 22:15:28.878044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.878053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.880729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.871 [2024-07-24 22:15:28.889840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.890348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.890368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.890377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.890543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.890709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.871 [2024-07-24 22:15:28.890727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.890736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.893211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.871 [2024-07-24 22:15:28.902610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.903026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.903044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.903053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.903210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.903367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.871 [2024-07-24 22:15:28.903378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.903386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.905936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.871 [2024-07-24 22:15:28.915403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.915827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.915846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.915856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.916021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.916186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.871 [2024-07-24 22:15:28.916198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.916206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.918711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.871 [2024-07-24 22:15:28.928094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.928558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.928576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.928586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.928764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.928930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.871 [2024-07-24 22:15:28.928941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.928950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.931461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.871 [2024-07-24 22:15:28.940900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.941344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.941396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.941429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.942039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.942582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.871 [2024-07-24 22:15:28.942593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.871 [2024-07-24 22:15:28.942602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.871 [2024-07-24 22:15:28.945148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.871 [2024-07-24 22:15:28.953785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.871 [2024-07-24 22:15:28.954261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.871 [2024-07-24 22:15:28.954280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.871 [2024-07-24 22:15:28.954289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.871 [2024-07-24 22:15:28.954450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.871 [2024-07-24 22:15:28.954608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.872 [2024-07-24 22:15:28.954618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.872 [2024-07-24 22:15:28.954627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.872 [2024-07-24 22:15:28.957183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.872 [2024-07-24 22:15:28.966554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.872 [2024-07-24 22:15:28.966998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.872 [2024-07-24 22:15:28.967016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.872 [2024-07-24 22:15:28.967025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.872 [2024-07-24 22:15:28.967182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.872 [2024-07-24 22:15:28.967340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.872 [2024-07-24 22:15:28.967351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.872 [2024-07-24 22:15:28.967359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.872 [2024-07-24 22:15:28.969850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
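The repeated "posix_sock_create: *ERROR*: connect() failed, errno = 111" entries above are ECONNREFUSED: the host side keeps retrying the reset while nothing is listening on 10.0.0.2:4420, so every reconnect attempt fails and bdev_nvme logs "Resetting controller failed." The following is a minimal standalone C sketch (not SPDK code; the address and port are simply copied from this log and assume a reachable host with no listener on that port) that produces the same errno:

/* Sketch only: connect() to a reachable host with no listener on the
 * target port fails with ECONNREFUSED (errno 111 on Linux), matching
 * the "connect() failed, errno = 111" retries in this log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expected while no NVMe/TCP target listens there: ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}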
00:27:49.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2845863 Killed "${NVMF_APP[@]}" "$@" 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.872 [2024-07-24 22:15:28.979457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.872 [2024-07-24 22:15:28.979967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.872 [2024-07-24 22:15:28.979987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.872 [2024-07-24 22:15:28.979997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.872 [2024-07-24 22:15:28.980168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.872 [2024-07-24 22:15:28.980338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.872 [2024-07-24 22:15:28.980349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.872 [2024-07-24 22:15:28.980358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.872 [2024-07-24 22:15:28.983032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2847278 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2847278 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2847278 ']' 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
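At this point bdevperf.sh has killed the original target process ("Killed ${NVMF_APP[@]}") and tgt_init/nvmfappstart relaunch nvmf_tgt (new nvmfpid 2847278), so the harness blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". Conceptually that wait just polls the RPC socket until a connect() succeeds; the sketch below illustrates the idea only and is not the actual waitforlisten helper.

/* Illustration only (not the autotest waitforlisten helper): poll a
 * UNIX-domain socket path until something is accepting connections on it. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Return 0 once connect() to `path` succeeds, -1 after `retries` attempts. */
static int wait_for_unix_listener(const char *path, int retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;               /* listener (e.g. the RPC socket) is up */
        }
        close(fd);
        usleep(100 * 1000);         /* retry every 100 ms */
    }
    return -1;
}

int main(void)
{
    const char *path = "/var/tmp/spdk.sock";

    if (wait_for_unix_listener(path, 100) == 0) {
        printf("%s is accepting connections\n", path);
    } else {
        printf("timed out waiting for %s\n", path);
    }
    return 0;
}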
00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.872 22:15:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.872 [2024-07-24 22:15:28.992329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.872 [2024-07-24 22:15:28.992822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.872 [2024-07-24 22:15:28.992840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.872 [2024-07-24 22:15:28.992851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.872 [2024-07-24 22:15:28.993020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.872 [2024-07-24 22:15:28.993191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.872 [2024-07-24 22:15:28.993201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.872 [2024-07-24 22:15:28.993210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.872 [2024-07-24 22:15:28.995886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.872 [2024-07-24 22:15:29.005348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.872 [2024-07-24 22:15:29.005864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.872 [2024-07-24 22:15:29.005883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.872 [2024-07-24 22:15:29.005894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.872 [2024-07-24 22:15:29.006065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.872 [2024-07-24 22:15:29.006236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.872 [2024-07-24 22:15:29.006247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.872 [2024-07-24 22:15:29.006256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.872 [2024-07-24 22:15:29.008928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.872 [2024-07-24 22:15:29.018215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.872 [2024-07-24 22:15:29.018751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.872 [2024-07-24 22:15:29.018770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.872 [2024-07-24 22:15:29.018781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.872 [2024-07-24 22:15:29.018952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.872 [2024-07-24 22:15:29.019123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.872 [2024-07-24 22:15:29.019137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.872 [2024-07-24 22:15:29.019148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.872 [2024-07-24 22:15:29.021827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.872 [2024-07-24 22:15:29.031071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.872 [2024-07-24 22:15:29.031562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.872 [2024-07-24 22:15:29.031580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.872 [2024-07-24 22:15:29.031590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.872 [2024-07-24 22:15:29.031767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.872 [2024-07-24 22:15:29.031947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.872 [2024-07-24 22:15:29.031958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.872 [2024-07-24 22:15:29.031967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.872 [2024-07-24 22:15:29.034275] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:27:49.872 [2024-07-24 22:15:29.034320] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.872 [2024-07-24 22:15:29.034565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.872 [2024-07-24 22:15:29.043971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.872 [2024-07-24 22:15:29.044509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.872 [2024-07-24 22:15:29.044530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.872 [2024-07-24 22:15:29.044542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.872 [2024-07-24 22:15:29.044725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.872 [2024-07-24 22:15:29.044901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.872 [2024-07-24 22:15:29.044914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.872 [2024-07-24 22:15:29.044924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.872 [2024-07-24 22:15:29.047650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.872 [2024-07-24 22:15:29.056880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.872 [2024-07-24 22:15:29.057312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.872 [2024-07-24 22:15:29.057331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.873 [2024-07-24 22:15:29.057342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.873 [2024-07-24 22:15:29.057509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.873 [2024-07-24 22:15:29.057675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.873 [2024-07-24 22:15:29.057686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.873 [2024-07-24 22:15:29.057699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.873 [2024-07-24 22:15:29.060365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.873 [2024-07-24 22:15:29.069848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.873 [2024-07-24 22:15:29.070362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.873 [2024-07-24 22:15:29.070381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:49.873 [2024-07-24 22:15:29.070391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:49.873 [2024-07-24 22:15:29.070557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:49.873 [2024-07-24 22:15:29.070728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.873 [2024-07-24 22:15:29.070740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.873 [2024-07-24 22:15:29.070749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.873 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.873 [2024-07-24 22:15:29.073409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.134 [2024-07-24 22:15:29.082724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.134 [2024-07-24 22:15:29.083169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.134 [2024-07-24 22:15:29.083188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.134 [2024-07-24 22:15:29.083199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.134 [2024-07-24 22:15:29.083370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.134 [2024-07-24 22:15:29.083540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.134 [2024-07-24 22:15:29.083552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.134 [2024-07-24 22:15:29.083561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.134 [2024-07-24 22:15:29.086232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.134 [2024-07-24 22:15:29.095676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.134 [2024-07-24 22:15:29.096116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.134 [2024-07-24 22:15:29.096135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.134 [2024-07-24 22:15:29.096146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.134 [2024-07-24 22:15:29.096312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.134 [2024-07-24 22:15:29.096477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.134 [2024-07-24 22:15:29.096488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.134 [2024-07-24 22:15:29.096497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.134 [2024-07-24 22:15:29.099148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.134 [2024-07-24 22:15:29.108623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.134 [2024-07-24 22:15:29.109046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.134 [2024-07-24 22:15:29.109065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.134 [2024-07-24 22:15:29.109075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.134 [2024-07-24 22:15:29.109241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.134 [2024-07-24 22:15:29.109406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.134 [2024-07-24 22:15:29.109417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.134 [2024-07-24 22:15:29.109427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.134 [2024-07-24 22:15:29.109606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:50.134 [2024-07-24 22:15:29.112117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.134 [2024-07-24 22:15:29.121601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.134 [2024-07-24 22:15:29.122103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.134 [2024-07-24 22:15:29.122123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.134 [2024-07-24 22:15:29.122133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.134 [2024-07-24 22:15:29.122299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.134 [2024-07-24 22:15:29.122466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.134 [2024-07-24 22:15:29.122478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.134 [2024-07-24 22:15:29.122486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.134 [2024-07-24 22:15:29.125145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.134 [2024-07-24 22:15:29.134448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.134 [2024-07-24 22:15:29.134962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.134 [2024-07-24 22:15:29.134981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.134 [2024-07-24 22:15:29.134992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.134 [2024-07-24 22:15:29.135161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.134 [2024-07-24 22:15:29.135332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.134 [2024-07-24 22:15:29.135346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.134 [2024-07-24 22:15:29.135356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.134 [2024-07-24 22:15:29.137983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.134 [2024-07-24 22:15:29.147315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.134 [2024-07-24 22:15:29.147833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.134 [2024-07-24 22:15:29.147853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.134 [2024-07-24 22:15:29.147863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.134 [2024-07-24 22:15:29.148034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.134 [2024-07-24 22:15:29.148200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.134 [2024-07-24 22:15:29.148212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.134 [2024-07-24 22:15:29.148221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.134 [2024-07-24 22:15:29.150840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.135 [2024-07-24 22:15:29.160239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.160746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.160766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.135 [2024-07-24 22:15:29.160777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.135 [2024-07-24 22:15:29.160947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.135 [2024-07-24 22:15:29.161118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.135 [2024-07-24 22:15:29.161130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.135 [2024-07-24 22:15:29.161139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.135 [2024-07-24 22:15:29.163817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.135 [2024-07-24 22:15:29.173126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.173581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.173600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.135 [2024-07-24 22:15:29.173610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.135 [2024-07-24 22:15:29.173784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.135 [2024-07-24 22:15:29.173956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.135 [2024-07-24 22:15:29.173967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.135 [2024-07-24 22:15:29.173976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.135 [2024-07-24 22:15:29.176641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.135 [2024-07-24 22:15:29.183917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.135 [2024-07-24 22:15:29.183945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.135 [2024-07-24 22:15:29.183954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.135 [2024-07-24 22:15:29.183963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.135 [2024-07-24 22:15:29.183970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:50.135 [2024-07-24 22:15:29.184012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.135 [2024-07-24 22:15:29.184100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:50.135 [2024-07-24 22:15:29.184103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.135 [2024-07-24 22:15:29.186107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.186557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.186577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.135 [2024-07-24 22:15:29.186588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.135 [2024-07-24 22:15:29.186766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.135 [2024-07-24 22:15:29.186936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.135 [2024-07-24 22:15:29.186948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.135 [2024-07-24 22:15:29.186957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:50.135 [2024-07-24 22:15:29.189627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.135 [2024-07-24 22:15:29.199095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.199601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.199621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.135 [2024-07-24 22:15:29.199632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.135 [2024-07-24 22:15:29.199808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.135 [2024-07-24 22:15:29.199979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.135 [2024-07-24 22:15:29.199991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.135 [2024-07-24 22:15:29.200001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.135 [2024-07-24 22:15:29.202671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.135 [2024-07-24 22:15:29.211975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.212493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.212514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.135 [2024-07-24 22:15:29.212525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.135 [2024-07-24 22:15:29.212698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.135 [2024-07-24 22:15:29.212874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.135 [2024-07-24 22:15:29.212886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.135 [2024-07-24 22:15:29.212897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.135 [2024-07-24 22:15:29.215559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
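The restarted target was launched with "-m 0xE" (see the nvmf_tgt command line above), and the app banner reports "Total cores available: 3" followed by reactors starting on cores 1, 2 and 3. A tiny sketch that decodes such a core mask the same way (0xE = 0b1110, i.e. cores 1-3); the mask value is simply copied from this log:

/* Sketch: decode an SPDK-style hexadecimal core mask. 0xE selects
 * cores 1, 2 and 3, matching the three reactors reported above. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xE;  /* value taken from "-m 0xE" in this log */
    int count = 0;

    printf("core mask 0x%llX ->", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf(" %d", core);
            count++;
        }
    }
    printf(" (total %d cores)\n", count);
    return 0;
}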
00:27:50.135 [2024-07-24 22:15:29.224870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.225344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.225365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.135 [2024-07-24 22:15:29.225375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.135 [2024-07-24 22:15:29.225552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.135 [2024-07-24 22:15:29.225730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.135 [2024-07-24 22:15:29.225742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.135 [2024-07-24 22:15:29.225753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.135 [2024-07-24 22:15:29.228417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.135 [2024-07-24 22:15:29.237894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.238398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.238419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.135 [2024-07-24 22:15:29.238430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.135 [2024-07-24 22:15:29.238602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.135 [2024-07-24 22:15:29.238780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.135 [2024-07-24 22:15:29.238793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.135 [2024-07-24 22:15:29.238803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.135 [2024-07-24 22:15:29.241470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.135 [2024-07-24 22:15:29.250774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.251226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.251245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.135 [2024-07-24 22:15:29.251255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.135 [2024-07-24 22:15:29.251426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.135 [2024-07-24 22:15:29.251612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.135 [2024-07-24 22:15:29.251624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.135 [2024-07-24 22:15:29.251633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.135 [2024-07-24 22:15:29.254306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.135 [2024-07-24 22:15:29.263772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.264248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.264268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.135 [2024-07-24 22:15:29.264279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.135 [2024-07-24 22:15:29.264449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.135 [2024-07-24 22:15:29.264619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.135 [2024-07-24 22:15:29.264631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.135 [2024-07-24 22:15:29.264645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.135 [2024-07-24 22:15:29.267318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.135 [2024-07-24 22:15:29.276790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.135 [2024-07-24 22:15:29.277177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.135 [2024-07-24 22:15:29.277196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.136 [2024-07-24 22:15:29.277206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.136 [2024-07-24 22:15:29.277376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.136 [2024-07-24 22:15:29.277546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.136 [2024-07-24 22:15:29.277558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.136 [2024-07-24 22:15:29.277568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.136 [2024-07-24 22:15:29.280243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.136 [2024-07-24 22:15:29.289698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.136 [2024-07-24 22:15:29.290126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.136 [2024-07-24 22:15:29.290146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.136 [2024-07-24 22:15:29.290156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.136 [2024-07-24 22:15:29.290328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.136 [2024-07-24 22:15:29.290498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.136 [2024-07-24 22:15:29.290510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.136 [2024-07-24 22:15:29.290520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.136 [2024-07-24 22:15:29.293193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.136 [2024-07-24 22:15:29.302679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.136 [2024-07-24 22:15:29.303143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.136 [2024-07-24 22:15:29.303163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.136 [2024-07-24 22:15:29.303173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.136 [2024-07-24 22:15:29.303351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.136 [2024-07-24 22:15:29.303532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.136 [2024-07-24 22:15:29.303547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.136 [2024-07-24 22:15:29.303559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.136 [2024-07-24 22:15:29.306279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.136 [2024-07-24 22:15:29.315670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.136 [2024-07-24 22:15:29.316049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.136 [2024-07-24 22:15:29.316073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.136 [2024-07-24 22:15:29.316084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.136 [2024-07-24 22:15:29.316255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.136 [2024-07-24 22:15:29.316426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.136 [2024-07-24 22:15:29.316439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.136 [2024-07-24 22:15:29.316448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.136 [2024-07-24 22:15:29.319122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.136 [2024-07-24 22:15:29.328576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.136 [2024-07-24 22:15:29.329096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.136 [2024-07-24 22:15:29.329116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.136 [2024-07-24 22:15:29.329126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.136 [2024-07-24 22:15:29.329296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.136 [2024-07-24 22:15:29.329467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.136 [2024-07-24 22:15:29.329479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.136 [2024-07-24 22:15:29.329488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.136 [2024-07-24 22:15:29.332164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.136 [2024-07-24 22:15:29.341470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.136 [2024-07-24 22:15:29.341899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.136 [2024-07-24 22:15:29.341919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.136 [2024-07-24 22:15:29.341930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.136 [2024-07-24 22:15:29.342101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.136 [2024-07-24 22:15:29.342272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.136 [2024-07-24 22:15:29.342283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.136 [2024-07-24 22:15:29.342292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.136 [2024-07-24 22:15:29.344967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.397 [2024-07-24 22:15:29.354422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.397 [2024-07-24 22:15:29.354734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.397 [2024-07-24 22:15:29.354754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.397 [2024-07-24 22:15:29.354764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.397 [2024-07-24 22:15:29.354935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.397 [2024-07-24 22:15:29.355109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.397 [2024-07-24 22:15:29.355120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.397 [2024-07-24 22:15:29.355129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.397 [2024-07-24 22:15:29.357808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.397 [2024-07-24 22:15:29.367428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.397 [2024-07-24 22:15:29.367853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.397 [2024-07-24 22:15:29.367873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.397 [2024-07-24 22:15:29.367883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.397 [2024-07-24 22:15:29.368054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.397 [2024-07-24 22:15:29.368225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.397 [2024-07-24 22:15:29.368236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.397 [2024-07-24 22:15:29.368245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.397 [2024-07-24 22:15:29.370927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.397 [2024-07-24 22:15:29.380369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.397 [2024-07-24 22:15:29.380823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.397 [2024-07-24 22:15:29.380843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.397 [2024-07-24 22:15:29.380853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.398 [2024-07-24 22:15:29.381024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.398 [2024-07-24 22:15:29.381194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.398 [2024-07-24 22:15:29.381206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.398 [2024-07-24 22:15:29.381215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.398 [2024-07-24 22:15:29.383890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.398 [2024-07-24 22:15:29.393342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.398 [2024-07-24 22:15:29.393767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.398 [2024-07-24 22:15:29.393786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420 00:27:50.398 [2024-07-24 22:15:29.393797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set 00:27:50.398 [2024-07-24 22:15:29.393975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor 00:27:50.398 [2024-07-24 22:15:29.394140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.398 [2024-07-24 22:15:29.394152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.398 [2024-07-24 22:15:29.394161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.398 [2024-07-24 22:15:29.396836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:50.398 [... 2024-07-24 22:15:29.406260 through 22:15:29.837544: the same reconnect sequence for nqn.2016-06.io.spdk:cnode1 repeats roughly every 13 ms (connect() to 10.0.0.2 port 4420 refused with errno = 111, controller reinitialization failed, Resetting controller failed) while the nvmf target is configured below ...]
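For reference, errno = 111 in the posix_sock_create errors above is ECONNREFUSED: the bdev_nvme layer keeps retrying its controller reset because nothing is accepting TCP connections on 10.0.0.2:4420 until the target's listener is added further down. A quick way to confirm that from a shell on the initiator side is sketched below; python3 and a netcat build that supports -z are assumptions here, neither is part of the test script itself.

    # Map the errno from the log to its symbolic name (prints: ECONNREFUSED Connection refused)
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # Probe the address/port bdevperf is retrying against
    nc -z -w 1 10.0.0.2 4420 && echo 'listener up' || echo 'still refused or timed out'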
00:27:50.662 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:50.662 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:27:50.662 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:50.662 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:50.662 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:50.922 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:50.922 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:50.922 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.922 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:50.922 [2024-07-24 22:15:29.887662] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
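The rpc_cmd call on the host/bdevperf.sh@17 line above is the autotest wrapper around SPDK's scripts/rpc.py, so the transport-creation step corresponds roughly to the stand-alone invocation sketched below. The RPC socket path shown is the rpc.py default and an assumption here; the flags are copied verbatim from the log.

    # Create the NVMe-oF TCP transport on the running nvmf_tgt (default RPC socket assumed)
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192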
00:27:50.922 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:50.922 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:50.923 Malloc0
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:50.923 [2024-07-24 22:15:29.950236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:50.923 [2024-07-24 22:15:29.950754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.923 [2024-07-24 22:15:29.950773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88fa70 with addr=10.0.0.2, port=4420
00:27:50.923 [2024-07-24 22:15:29.950784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88fa70 is same with the state(5) to be set
00:27:50.923 [2024-07-24 22:15:29.950956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88fa70 (9): Bad file descriptor
00:27:50.923 [2024-07-24 22:15:29.951126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:50.923 [2024-07-24 22:15:29.951137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:50.923 [2024-07-24 22:15:29.951146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:50.923 [2024-07-24 22:15:29.952360] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:50.923 [2024-07-24 22:15:29.953820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:50.923 22:15:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2846407
00:27:50.923 [2024-07-24 22:15:29.963109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:50.923 [2024-07-24 22:15:30.003793] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
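Taken together, the rpc_cmd lines interleaved with the reconnect noise above perform the usual NVMe-oF target bring-up, and the log shows the very next reset attempt succeeding once the listener is up. A condensed sketch of the same sequence, again assuming rpc_cmd maps to scripts/rpc.py against the default RPC socket:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport (bdevperf.sh@17)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                       # 64 MB malloc bdev, 512-byte blocks (bdevperf.sh@18)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # subsystem, any host allowed (bdevperf.sh@19)
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace (bdevperf.sh@20)
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # start listening (bdevperf.sh@21)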
00:28:00.909 00:28:00.909 Latency(us) 00:28:00.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.909 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:00.909 Verification LBA range: start 0x0 length 0x4000 00:28:00.909 Nvme1n1 : 15.01 8678.47 33.90 13393.97 0.00 5780.39 635.70 15623.78 00:28:00.909 =================================================================================================================== 00:28:00.909 Total : 8678.47 33.90 13393.97 0.00 5780.39 635.70 15623.78 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:00.909 rmmod nvme_tcp 00:28:00.909 rmmod nvme_fabrics 00:28:00.909 rmmod nvme_keyring 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2847278 ']' 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2847278 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2847278 ']' 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2847278 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2847278 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2847278' 00:28:00.909 killing process with pid 2847278 00:28:00.909 22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2847278 00:28:00.909 
22:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2847278 00:28:00.909 22:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:00.909 22:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:00.909 22:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:00.909 22:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:00.909 22:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:00.909 22:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.909 22:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.909 22:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:02.289 00:28:02.289 real 0m27.337s 00:28:02.289 user 1m2.822s 00:28:02.289 sys 0m7.956s 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.289 ************************************ 00:28:02.289 END TEST nvmf_bdevperf 00:28:02.289 ************************************ 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.289 ************************************ 00:28:02.289 START TEST nvmf_target_disconnect 00:28:02.289 ************************************ 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:02.289 * Looking for test storage... 
00:28:02.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.289 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.290 
22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:02.290 22:15:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.862 
22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:08.862 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:08.862 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.862 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:08.863 Found net devices under 0000:af:00.0: cvl_0_0 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:08.863 Found net devices under 0000:af:00.1: cvl_0_1 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:08.863 22:15:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:08.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:28:08.863 00:28:08.863 --- 10.0.0.2 ping statistics --- 00:28:08.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.863 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:08.863 00:28:08.863 --- 10.0.0.1 ping statistics --- 00:28:08.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.863 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:08.863 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.122 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:09.123 ************************************ 00:28:09.123 START TEST nvmf_target_disconnect_tc1 00:28:09.123 ************************************ 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:09.123 22:15:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:09.123 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.123 [2024-07-24 22:15:48.245459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.123 [2024-07-24 22:15:48.245501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x627140 with addr=10.0.0.2, port=4420 00:28:09.123 [2024-07-24 22:15:48.245524] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:09.123 [2024-07-24 22:15:48.245534] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:09.123 [2024-07-24 22:15:48.245542] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:09.123 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:09.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:09.123 Initializing NVMe Controllers 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:09.123 00:28:09.123 real 0m0.118s 00:28:09.123 user 0m0.046s 00:28:09.123 sys 0m0.072s 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.123 ************************************ 00:28:09.123 END TEST nvmf_target_disconnect_tc1 00:28:09.123 ************************************ 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:09.123 22:15:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.123 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:09.383 ************************************ 00:28:09.384 START TEST nvmf_target_disconnect_tc2 00:28:09.384 ************************************ 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2852646 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2852646 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2852646 ']' 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.384 22:15:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.384 [2024-07-24 22:15:48.385485] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:28:09.384 [2024-07-24 22:15:48.385529] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.384 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.384 [2024-07-24 22:15:48.470802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.384 [2024-07-24 22:15:48.542416] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:09.384 [2024-07-24 22:15:48.542454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.384 [2024-07-24 22:15:48.542464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.384 [2024-07-24 22:15:48.542472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.384 [2024-07-24 22:15:48.542479] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.384 [2024-07-24 22:15:48.542603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:09.384 [2024-07-24 22:15:48.542737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:09.384 [2024-07-24 22:15:48.542809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:09.384 [2024-07-24 22:15:48.542810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.337 Malloc0 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.337 [2024-07-24 22:15:49.262831] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.337 [2024-07-24 22:15:49.291075] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2852792 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:10.337 22:15:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:10.337 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.247 22:15:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2852646 00:28:12.247 22:15:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting 
I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Write completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Write completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Write completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Write completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Write completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Write completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Write completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Write completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 [2024-07-24 22:15:51.318343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.247 Read completed with error (sct=0, sc=8) 00:28:12.247 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 
00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 [2024-07-24 22:15:51.318608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 
Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 [2024-07-24 22:15:51.318836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed 
with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Read completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 Write completed with error (sct=0, sc=8) 00:28:12.248 starting I/O failed 00:28:12.248 [2024-07-24 22:15:51.319059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:12.248 [2024-07-24 22:15:51.319317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.248 [2024-07-24 22:15:51.319336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.248 qpair failed and we were unable to recover it. 00:28:12.248 [2024-07-24 22:15:51.319667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.248 [2024-07-24 22:15:51.319682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.248 qpair failed and we were unable to recover it. 00:28:12.248 [2024-07-24 22:15:51.319860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.248 [2024-07-24 22:15:51.319874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.248 qpair failed and we were unable to recover it. 00:28:12.248 [2024-07-24 22:15:51.320122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.248 [2024-07-24 22:15:51.320136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.248 qpair failed and we were unable to recover it. 00:28:12.248 [2024-07-24 22:15:51.320380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.248 [2024-07-24 22:15:51.320394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.248 qpair failed and we were unable to recover it. 00:28:12.248 [2024-07-24 22:15:51.320578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.248 [2024-07-24 22:15:51.320620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.248 qpair failed and we were unable to recover it. 00:28:12.248 [2024-07-24 22:15:51.320935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.320977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.321316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.321356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 
00:28:12.249 [2024-07-24 22:15:51.321658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.321698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.321957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.321998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.322288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.322330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.322580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.322593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.322855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.322869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.323091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.323105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.323342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.323355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.323562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.323575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.323760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.323773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.324096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.324137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 
00:28:12.249 [2024-07-24 22:15:51.324422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.324463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.324760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.324774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.325025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.325038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.325275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.325289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.325508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.325522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.325676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.325690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.325878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.325891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.326186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.326199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.326380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.326394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.326568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.326581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 
00:28:12.249 [2024-07-24 22:15:51.326808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.326822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.327055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.327069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.327296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.327313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.327550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.327567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.327802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.327819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.328048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.328065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.328303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.328320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.328629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.328646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.328824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.328841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.329143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.329160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 
00:28:12.249 [2024-07-24 22:15:51.329405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.329422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.329720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.329738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.329977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.329995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.330221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.330238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.330479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.249 [2024-07-24 22:15:51.330497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.249 qpair failed and we were unable to recover it. 00:28:12.249 [2024-07-24 22:15:51.330667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.330684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.330942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.330959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.331276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.331293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.331481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.331498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.331746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.331763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 
00:28:12.250 [2024-07-24 22:15:51.331929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.331946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.332183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.332200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.332439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.332456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.332686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.332703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.332886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.332908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.333227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.333245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.333429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.333446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.333644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.333664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.333895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.333909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.334062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.334075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 
00:28:12.250 [2024-07-24 22:15:51.334373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.334387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.334544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.334558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.334774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.334788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.335101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.335115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.335425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.335439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.335661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.335677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.335923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.335937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.336189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.336202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.336432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.336446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.336734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.336748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 
00:28:12.250 [2024-07-24 22:15:51.336901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.336914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.337140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.337154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.337321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.337334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.337553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.337566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.337891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.337905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.338133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.338146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.338363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.338377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.338538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.338551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.338842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.338855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.339094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.339107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 
00:28:12.250 [2024-07-24 22:15:51.339352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.339365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.339591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.250 [2024-07-24 22:15:51.339604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.250 qpair failed and we were unable to recover it. 00:28:12.250 [2024-07-24 22:15:51.339847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.339861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.340050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.340063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.340279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.340293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.340442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.340455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.340667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.340681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.340787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.340800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.341030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.341043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.341351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.341365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 
00:28:12.251 [2024-07-24 22:15:51.341544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.341557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.341786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.341800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.342039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.342053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.342270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.342283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.342522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.342535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.342742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.342755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.342967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.342980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.343131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.343145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.343291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.343304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.343613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.343626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 
00:28:12.251 [2024-07-24 22:15:51.343859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.343873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.344113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.344126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.344275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.344288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.344541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.344554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.344727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.344741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.344983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.344999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.345159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.345172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.345352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.345365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.345602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.345616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.345713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.345731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 
00:28:12.251 [2024-07-24 22:15:51.345968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.345981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.346225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.346238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.346345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.346357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.346658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.346671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.346910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.346924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.347155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.347167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.347338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.347350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.251 qpair failed and we were unable to recover it. 00:28:12.251 [2024-07-24 22:15:51.347521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.251 [2024-07-24 22:15:51.347532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.347769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.347780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.347955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.347967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 
00:28:12.252 [2024-07-24 22:15:51.348189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.348200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.348423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.348435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.348742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.348754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.368639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.368652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.369011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.369023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.369331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.369343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.369571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.369583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.369749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.369761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.370022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.370033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.370326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.370337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 
00:28:12.252 [2024-07-24 22:15:51.370557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.370568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.370828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.370840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.371161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.371173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.371405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.371416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.371584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.371595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.371762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.371774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.372016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.372027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.372195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.372206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.372489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.372500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.252 [2024-07-24 22:15:51.372670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.372681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 
00:28:12.252 [2024-07-24 22:15:51.372862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.252 [2024-07-24 22:15:51.372873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.252 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.373123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.373134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.373441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.373452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.373686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.373697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.374031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.374043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.374347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.374360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.374534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.374545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.374708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.374723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.374899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.374911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.375127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.375138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 
00:28:12.253 [2024-07-24 22:15:51.375446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.375457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.375564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.375576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.375811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.375823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.376039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.376050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.376356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.376367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.376651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.376662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.376890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.376901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.377145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.377156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.377390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.377402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.377639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.377651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 
00:28:12.253 [2024-07-24 22:15:51.377983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.377995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.378224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.378236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.378539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.378550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.378712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.378727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.378956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.378968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.379256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.379268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.379435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.379446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.379703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.379718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.384745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.384758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.385012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.385023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 
00:28:12.253 [2024-07-24 22:15:51.385332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.385343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.385578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.385589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.385819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.385830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.386088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.386100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.386331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.386343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.386563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.386574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.386789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.253 [2024-07-24 22:15:51.386800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.253 qpair failed and we were unable to recover it. 00:28:12.253 [2024-07-24 22:15:51.387018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.387029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.387191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.387203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.387415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.387428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 
00:28:12.254 [2024-07-24 22:15:51.387657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.387668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.387973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.387986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.388138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.388149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.388387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.388398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.388668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.388680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.388915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.388929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.389161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.389172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.389424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.389436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.389657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.389669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.389976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.389988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 
00:28:12.254 [2024-07-24 22:15:51.390277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.390289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.390505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.390517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.390749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.390762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.391047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.391059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.391241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.391254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.391492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.391504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.391673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.391685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.391863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.391876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.392095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.392108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.392363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.392375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 
00:28:12.254 [2024-07-24 22:15:51.392595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.392607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.392774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.392786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.393108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.393120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.393358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.393369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.393536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.393548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.393839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.393852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.394081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.394093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.394403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.394415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.394649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.394661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.394878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.394890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 
00:28:12.254 [2024-07-24 22:15:51.395145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.395158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.395386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.395398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.395572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.395584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.254 [2024-07-24 22:15:51.395820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.254 [2024-07-24 22:15:51.395832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.254 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.396069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.396082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.396309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.396321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.396589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.396601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.396817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.396829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.397063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.397076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.397300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.397312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 
00:28:12.255 [2024-07-24 22:15:51.397543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.397555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.397739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.397752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.398045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.398058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.398296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.398309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.398560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.398572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.398800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.398815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.399041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.399054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.399363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.399378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.399604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.399617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.399838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.399850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 
00:28:12.255 [2024-07-24 22:15:51.400110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.400123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.400425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.400437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.400671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.400683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.400962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.400975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.401146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.401158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.401415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.401426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.401712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.401729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.401989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.402001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.402181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.402193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.402430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.402442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 
00:28:12.255 [2024-07-24 22:15:51.402702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.402719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.403022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.403036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.403290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.403303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.403644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.403656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.403821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.403833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.404076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.404089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.404299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.404312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.404553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.404566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.404776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.404788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 00:28:12.255 [2024-07-24 22:15:51.405109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.255 [2024-07-24 22:15:51.405122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.255 qpair failed and we were unable to recover it. 
00:28:12.256 [2024-07-24 22:15:51.405436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.256 [2024-07-24 22:15:51.405447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.256 qpair failed and we were unable to recover it. 00:28:12.256 [2024-07-24 22:15:51.405745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.256 [2024-07-24 22:15:51.405757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.256 qpair failed and we were unable to recover it. 00:28:12.256 [2024-07-24 22:15:51.406010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.256 [2024-07-24 22:15:51.406021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.256 qpair failed and we were unable to recover it. 00:28:12.256 [2024-07-24 22:15:51.406242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.256 [2024-07-24 22:15:51.406267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.256 qpair failed and we were unable to recover it. 00:28:12.256 [2024-07-24 22:15:51.406483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.913534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.913947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.913965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.914273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.914287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.914604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.914616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.914947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.914961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.915156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.915168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 
00:28:12.831 [2024-07-24 22:15:51.915341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.915355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.915609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.915623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.915805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.915818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.916000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.916015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.916329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.916344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.916542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.916557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.917057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.917075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.917373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.917386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.917578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.917590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.917838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.917850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 
00:28:12.831 [2024-07-24 22:15:51.918181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.918195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.918371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.918384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.918672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.918684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.918926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.918939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.919234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.919247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.919530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.919544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.919791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.919804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.920045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.920058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.920258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.920270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.920572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.920584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 
00:28:12.831 [2024-07-24 22:15:51.920816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.920829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.921089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.921101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.921280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.921293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.921532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.921545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.921772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.921785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.921964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.831 [2024-07-24 22:15:51.921975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.831 qpair failed and we were unable to recover it. 00:28:12.831 [2024-07-24 22:15:51.922225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.922237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.922567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.922579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.922813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.922826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.923117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.923129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 
00:28:12.832 [2024-07-24 22:15:51.923374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.923413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.923707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.923759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.924073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.924117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.924464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.924477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.924789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.924816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.925004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.925019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.925208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.925221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.925389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.925401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.925630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.925642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.925883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.925903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 
00:28:12.832 [2024-07-24 22:15:51.926224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.926237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.926460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.926471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.926654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.926666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.926850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.926862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.927111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.927123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.927338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.927352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.927583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.927595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.927829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.927870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.928119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.928159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.928544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.928584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 
00:28:12.832 [2024-07-24 22:15:51.928873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.928913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.929239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.929278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.929625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.929637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.929895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.929907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.930127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.930140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.930372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.930384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.930661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.930673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.930891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.930904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.931141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.931153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.931415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.931445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 
00:28:12.832 [2024-07-24 22:15:51.931843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.931884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.932263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.932303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.832 [2024-07-24 22:15:51.932674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.832 [2024-07-24 22:15:51.932713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.832 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.933103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.933144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.933479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.933519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.933893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.933944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.934163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.934175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.934495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.934535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.934896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.934938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.935295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.935335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 
00:28:12.833 [2024-07-24 22:15:51.935691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.935749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.936036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.936048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.936335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.936387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.936757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.936798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.937172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.937213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.937511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.937551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.937786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.937827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.938181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.938221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.938553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.938593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.938917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.938958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 
00:28:12.833 [2024-07-24 22:15:51.939270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.939310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.939683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.939736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.940045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.940086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.940455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.940467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.940732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.940761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.941069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.941115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.941495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.941534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.941924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.941966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.942254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.942295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.942644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.942684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 
00:28:12.833 [2024-07-24 22:15:51.943076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.943116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.943435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.943447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.943777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.943790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.944032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.944072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.944371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.944410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.944761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.944802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.945119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.945160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.945475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.945515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.833 [2024-07-24 22:15:51.945836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.833 [2024-07-24 22:15:51.945878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.833 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.946258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.946298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 
00:28:12.834 [2024-07-24 22:15:51.946675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.946726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.947098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.947138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.947489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.947528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.947854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.947895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.948282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.948322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.948616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.948655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.949039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.949080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.949376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.949416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.949759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.949772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.950037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.950076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 
00:28:12.834 [2024-07-24 22:15:51.950480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.950520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.950894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.950935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.951319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.951360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.951655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.951667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.952023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.952064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.952438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.952477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.952832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.952873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.953257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.953297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.953583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.953594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.953846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.953858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 
00:28:12.834 [2024-07-24 22:15:51.954100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.954112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.954352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.954364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.954662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.954701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.955090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.955131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.955396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.955407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.955735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.955782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.956156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.956195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.956479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.956519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.956805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.956846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.957179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.957219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 
00:28:12.834 [2024-07-24 22:15:51.957604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.957615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.957842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.957883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.958228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.958240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.958640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.958680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.834 [2024-07-24 22:15:51.959013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.834 [2024-07-24 22:15:51.959053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.834 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.959357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.959407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.959624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.959636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.959966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.960008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.960404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.960444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.960726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.960738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 
00:28:12.835 [2024-07-24 22:15:51.960972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.960984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.961300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.961341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.961730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.961770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.962075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.962116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.962491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.962532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.962902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.962943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.963222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.963262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.963624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.963664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.963967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.964009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.964288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.964328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 
00:28:12.835 [2024-07-24 22:15:51.964698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.964750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.965052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.965093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.965487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.965528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.965774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.965815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.966141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.966180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.966557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.966597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.966902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.966943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.967313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.967353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.967649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.967660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.967901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.967913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 
00:28:12.835 [2024-07-24 22:15:51.968204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.968216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.968397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.968408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.968733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.968774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.969162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.969203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.835 qpair failed and we were unable to recover it. 00:28:12.835 [2024-07-24 22:15:51.969579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.835 [2024-07-24 22:15:51.969619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.970021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.970067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.970377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.970418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.970784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.970813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.971124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.971164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.971538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.971578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 
00:28:12.836 [2024-07-24 22:15:51.971949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.971990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.972283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.972323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.972690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.972734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.972987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.973026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.973385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.973425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.973803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.973844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.974226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.974266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.974614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.974655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.975049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.975090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.975467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.975507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 
00:28:12.836 [2024-07-24 22:15:51.975899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.975941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.976244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.976284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.976653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.976692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.977073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.977114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.977506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.977546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.977865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.977906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.978262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.978301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.978617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.978628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.978918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.978930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.979261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.979300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 
00:28:12.836 [2024-07-24 22:15:51.979676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.979727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.980133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.980173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.980555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.980596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.980881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.980922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.981294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.981334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.981708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.981757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.982063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.982103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.982440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.982479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.982773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.982815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.983166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.983205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 
00:28:12.836 [2024-07-24 22:15:51.983592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.983633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.836 qpair failed and we were unable to recover it. 00:28:12.836 [2024-07-24 22:15:51.984034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.836 [2024-07-24 22:15:51.984075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.984372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.984384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.984695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.984706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.984999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.985011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.985331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.985376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.985753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.985794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.986165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.986204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.986580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.986620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.986937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.986979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 
00:28:12.837 [2024-07-24 22:15:51.987355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.987394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.987768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.987809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.988188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.988228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.988595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.988607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.988932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.988972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.989327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.989367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.989787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.989828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.990203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.990242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.990613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.990625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.990941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.990953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 
00:28:12.837 [2024-07-24 22:15:51.991274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.991314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.991665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.991704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.992098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.992138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.992489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.992529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.992920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.992962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.993215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.993255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.993629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.993669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.993989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.994030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.994418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.994459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.994795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.994836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 
00:28:12.837 [2024-07-24 22:15:51.995215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.995255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.995555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.995595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.995973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.996020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.996376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.996416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.996727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.996767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.997075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.997115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.997496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.997531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.997859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.837 [2024-07-24 22:15:51.997900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.837 qpair failed and we were unable to recover it. 00:28:12.837 [2024-07-24 22:15:51.998208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.838 [2024-07-24 22:15:51.998248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.838 qpair failed and we were unable to recover it. 00:28:12.838 [2024-07-24 22:15:51.998633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.838 [2024-07-24 22:15:51.998672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:12.838 qpair failed and we were unable to recover it. 
00:28:12.838 [2024-07-24 22:15:51.999071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:12.838 [2024-07-24 22:15:51.999112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 
00:28:12.838 qpair failed and we were unable to recover it. 
[identical failure sequence repeated for each subsequent connection attempt: connect() failed, errno = 111 (posix.c:1023); sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 (nvme_tcp.c:2383); qpair failed and we were unable to recover it. - recurring from 2024-07-24 22:15:51.999490 through 22:15:52.077703, console timestamps 00:28:12.838 to 00:28:13.114]
00:28:13.114 [2024-07-24 22:15:52.078084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.078124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.078512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.078552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.078938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.078979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.079363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.079403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.079685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.079697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.079971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.080018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.080382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.080423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.080823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.080864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.081168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.081208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.081594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.081634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 
00:28:13.114 [2024-07-24 22:15:52.082036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.082079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.082392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.082432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.082815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.082856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.083238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.083279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.083656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.083696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.084041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.084081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.084400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.084440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.084768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.084809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.085108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.085148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.085510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.085550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 
00:28:13.114 [2024-07-24 22:15:52.085835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.085847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.086188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.086229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.086647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.086694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.087016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.087057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.087442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.087482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.087864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.087905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.088238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.088279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.088667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.088707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.089028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.089069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.089432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.089472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 
00:28:13.114 [2024-07-24 22:15:52.089864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.089905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.114 [2024-07-24 22:15:52.090234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.114 [2024-07-24 22:15:52.090274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.114 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.090666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.090706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.091020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.091061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.091391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.091431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.091796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.091837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.092225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.092265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.092651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.092691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.092937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.092949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.093205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.093217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 
00:28:13.115 [2024-07-24 22:15:52.093524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.093563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.093936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.093978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.094366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.094408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.094711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.094727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.095050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.095089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.095466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.095506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.095902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.095943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.096327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.096368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.096759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.096801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.097184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.097224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 
00:28:13.115 [2024-07-24 22:15:52.097614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.097654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.098049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.098090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.098474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.098514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.098909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.098950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.099338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.099379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.115 qpair failed and we were unable to recover it. 00:28:13.115 [2024-07-24 22:15:52.099743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.115 [2024-07-24 22:15:52.099785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.100167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.100207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.100589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.100629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.101004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.101045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.101408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.101448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 
00:28:13.116 [2024-07-24 22:15:52.101842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.101883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.102194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.102234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.102626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.102672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.103067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.103109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.103493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.103532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.103914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.103956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.104347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.104388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.104765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.104778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.105110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.105151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.105466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.105506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 
00:28:13.116 [2024-07-24 22:15:52.105896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.105937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.106298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.116 [2024-07-24 22:15:52.106338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.116 qpair failed and we were unable to recover it. 00:28:13.116 [2024-07-24 22:15:52.106738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.106779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.107163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.107204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.107591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.107631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.107949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.107962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.108233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.108245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.108569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.108582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.108855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.108897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.109260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.109310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 
00:28:13.117 [2024-07-24 22:15:52.109637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.109678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.110091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.110132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.110520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.110560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.110840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.110852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.111178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.111218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.111606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.111646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.111872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.111885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.112233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.112273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.112603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.112643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.112913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.112926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 
00:28:13.117 [2024-07-24 22:15:52.113251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.113264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.113572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.113612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.114001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.114041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.114377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.114417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.114804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.114845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.115151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.117 [2024-07-24 22:15:52.115191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.117 qpair failed and we were unable to recover it. 00:28:13.117 [2024-07-24 22:15:52.115574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.115613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.115993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.116005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.116309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.116322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.116550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.116563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 
00:28:13.118 [2024-07-24 22:15:52.116874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.116915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.117302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.117343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.117737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.117790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.118178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.118218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.118581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.118621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.119012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.119054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.119449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.119489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.119878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.119919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.120304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.120344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.120738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.120780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 
00:28:13.118 [2024-07-24 22:15:52.121096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.121137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.121523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.121563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.121863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.121876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.122235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.122275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.122601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.122641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.122996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.123009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.123265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.123277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.123632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.123672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.123994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.124036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 00:28:13.118 [2024-07-24 22:15:52.124339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.124379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.118 qpair failed and we were unable to recover it. 
00:28:13.118 [2024-07-24 22:15:52.124741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.118 [2024-07-24 22:15:52.124792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.125122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.125134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.125313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.125326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.125572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.125585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.125846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.125859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.126216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.126256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.126651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.126691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.127063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.127103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.127477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.127517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.127883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.127925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 
00:28:13.119 [2024-07-24 22:15:52.128317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.128358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.128729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.128769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.129151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.129164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.129398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.129410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.129738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.129780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.130142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.130182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.130429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.130469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.130802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.130843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.131260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.131301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.131602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.131643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 
00:28:13.119 [2024-07-24 22:15:52.132042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.132084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.132413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.132449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.132706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.132725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.133056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.133096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.133483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.119 [2024-07-24 22:15:52.133524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.119 qpair failed and we were unable to recover it. 00:28:13.119 [2024-07-24 22:15:52.133769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.133783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.134067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.134109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.134496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.134536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.134897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.134910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.135227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.135268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 
00:28:13.120 [2024-07-24 22:15:52.135605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.135646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.135984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.135997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.136329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.136369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.136736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.136777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.137177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.137217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.137599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.137640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.137970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.137983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.138318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.138358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.138749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.138791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.139050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.139063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 
00:28:13.120 [2024-07-24 22:15:52.139391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.139431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.139757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.139799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.140095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.140108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.140431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.140444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.140791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.140832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.141214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.141254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.141642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.141682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.141947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.141959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.142290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.142331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 00:28:13.120 [2024-07-24 22:15:52.142744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.120 [2024-07-24 22:15:52.142786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.120 qpair failed and we were unable to recover it. 
00:28:13.121 [2024-07-24 22:15:52.143176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.143217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.143544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.143585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.143870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.143883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.144216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.144257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.144657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.144698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.144989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.145003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.145351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.145390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.145675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.145733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.146133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.146174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.146560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.146599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 
00:28:13.121 [2024-07-24 22:15:52.146867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.146879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.147129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.147142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.147410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.147426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.147739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.147781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.148145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.148185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.148496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.148543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.148876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.148918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.149330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.149371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.149753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.149794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.150177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.150217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 
00:28:13.121 [2024-07-24 22:15:52.150580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.150621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.150931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.150972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.151307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.151348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.151735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.121 [2024-07-24 22:15:52.151776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.121 qpair failed and we were unable to recover it. 00:28:13.121 [2024-07-24 22:15:52.152169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.152209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.152610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.152650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.153058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.153098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.153476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.153515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.153859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.153871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.154181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.154221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 
00:28:13.122 [2024-07-24 22:15:52.154590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.154630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.154950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.154963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.155265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.155277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.155551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.155591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.155984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.156026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.156319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.156359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.156743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.156795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.157112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.157152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.157536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.157576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.157965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.158007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 
00:28:13.122 [2024-07-24 22:15:52.158314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.158355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.158741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.158782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.159167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.159208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.159581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.159621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.160010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.160023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.160347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.160359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.160728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.160742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.160999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.161012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.161336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.161349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 00:28:13.122 [2024-07-24 22:15:52.161672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.122 [2024-07-24 22:15:52.161685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.122 qpair failed and we were unable to recover it. 
00:28:13.122 [2024-07-24 22:15:52.162056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.162069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.162395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.162408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.162654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.162669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.162993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.163006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.163187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.163200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.163397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.163410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.163655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.163668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.163992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.164005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.164184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.164196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.164563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.164577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 
00:28:13.123 [2024-07-24 22:15:52.164827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.164841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.165170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.165183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.165412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.165425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.165646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.165659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.165988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.166001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.166324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.166336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.166591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.166603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.166907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.166920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.123 [2024-07-24 22:15:52.167152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.123 [2024-07-24 22:15:52.167164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.123 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.167510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.167523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 
00:28:13.124 [2024-07-24 22:15:52.167774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.167788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.168121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.168134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.168438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.168463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.168785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.168798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.169098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.169111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.169434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.169447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.169754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.169767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.170066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.170078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.170390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.170402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.170657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.170670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 
00:28:13.124 [2024-07-24 22:15:52.170930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.170944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.171291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.171303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.171643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.171656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.171917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.171930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.172277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.172290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.172549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.172562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.172919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.172932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.173230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.173242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.173474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.173487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.173740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.173754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 
00:28:13.124 [2024-07-24 22:15:52.173908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.173921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.174219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.174232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.124 [2024-07-24 22:15:52.174505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.124 [2024-07-24 22:15:52.174520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.124 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.174784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.174797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.175132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.175144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.175328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.175341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.175651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.175664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.175914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.175926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.176258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.176270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.176513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.176525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 
00:28:13.125 [2024-07-24 22:15:52.176840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.176853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.177140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.177153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.177429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.177441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.177633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.177645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.177988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.178002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.178348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.178361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.178626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.178638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.178963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.178976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.179332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.179345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.179515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.179528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 
00:28:13.125 [2024-07-24 22:15:52.179773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.179786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.180013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.180026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.180275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.180288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.180522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.180534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.180783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.180796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.181120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.125 [2024-07-24 22:15:52.181133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.125 qpair failed and we were unable to recover it. 00:28:13.125 [2024-07-24 22:15:52.181440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.181453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.181755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.181768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.182093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.182106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.182358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.182372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 
00:28:13.126 [2024-07-24 22:15:52.182605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.182618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.182854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.182867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.183109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.183121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.183380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.183393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.183637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.183649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.183892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.183905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.184231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.184243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.184473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.184486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.184711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.184729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 00:28:13.126 [2024-07-24 22:15:52.185000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.126 [2024-07-24 22:15:52.185013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.126 qpair failed and we were unable to recover it. 
00:28:13.126 [2024-07-24 22:15:52.185327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.126 [2024-07-24 22:15:52.185339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:13.126 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously for this tqpair from 22:15:52.185327 through 22:15:52.262059 ...]
00:28:13.134 [2024-07-24 22:15:52.262436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.134 [2024-07-24 22:15:52.262477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.134 qpair failed and we were unable to recover it. 00:28:13.134 [2024-07-24 22:15:52.262855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.134 [2024-07-24 22:15:52.262897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.134 qpair failed and we were unable to recover it. 00:28:13.134 [2024-07-24 22:15:52.263268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.263308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.263690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.263741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.264108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.264149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.264513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.264553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.264862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.264907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.265149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.265161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.265473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.265514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.265893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.265934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 
00:28:13.135 [2024-07-24 22:15:52.266324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.266364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.266747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.266788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.267116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.267128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.267444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.267485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.267866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.267907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.268304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.268345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.268736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.268778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.269164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.269205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.269610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.269651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.270052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.270099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 
00:28:13.135 [2024-07-24 22:15:52.270394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.270406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.270752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.270794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.271178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.271219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.271600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.271640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.272030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.272084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.272466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.272506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.272797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.272838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.273238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.273279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.273596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.273637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 00:28:13.135 [2024-07-24 22:15:52.273958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.135 [2024-07-24 22:15:52.273971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.135 qpair failed and we were unable to recover it. 
00:28:13.135 [2024-07-24 22:15:52.274305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.274345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.274636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.274676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.275081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.275123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.275521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.275561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.275870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.275911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.276306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.276346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.276713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.276766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.277143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.277155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.277491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.277531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.277915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.277956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 
00:28:13.136 [2024-07-24 22:15:52.278353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.278394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.278758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.278799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.279161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.279201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.279590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.279630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.280006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.280047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.280433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.280472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.280842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.280883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.281200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.281240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.281558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.281599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.281915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.281928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 
00:28:13.136 [2024-07-24 22:15:52.282262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.282302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.282694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.282745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.283121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.283161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.283516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.283545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.283871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.283912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.284246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.284286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.284680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.284732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.136 [2024-07-24 22:15:52.285120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.136 [2024-07-24 22:15:52.285161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.136 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.285544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.285585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.285972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.286029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 
00:28:13.137 [2024-07-24 22:15:52.286278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.286290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.286616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.286656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.287049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.287090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.287421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.287433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.287681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.287694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.288006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.288036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.288425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.288464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.288856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.288897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.289288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.289328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.289639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.289680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 
00:28:13.137 [2024-07-24 22:15:52.290079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.290120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.290513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.290552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.290870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.290911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.291300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.291341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.291645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.291686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.292078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.292119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.292500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.292512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.292764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.292777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.293086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.293126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.293481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.293521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 
00:28:13.137 [2024-07-24 22:15:52.293843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.293885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.294270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.294311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.294695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.294748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.295145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.295185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.295548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.295589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.295985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.296026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.296420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.296461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.296831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.296873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.297192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.297233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.297614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.297654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 
00:28:13.137 [2024-07-24 22:15:52.298046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.298087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.298473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.298513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.298899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.298940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.137 qpair failed and we were unable to recover it. 00:28:13.137 [2024-07-24 22:15:52.299314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.137 [2024-07-24 22:15:52.299366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.299748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.299790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.300176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.300216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.300584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.300624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.300914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.300955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.301341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.301381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.301692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.301762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 
00:28:13.138 [2024-07-24 22:15:52.302174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.302215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.302602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.302641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.303027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.303068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.303377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.303417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.303804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.303846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.304174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.304214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.304500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.304539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.304933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.304974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.305275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.305287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.305641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.305681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 
00:28:13.138 [2024-07-24 22:15:52.306039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.306080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.306384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.306396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.306663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.306675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.307004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.307017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.307247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.307259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.307526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.307569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.307959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.308001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.308376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.308389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.308629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.308642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.308897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.308911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 
00:28:13.138 [2024-07-24 22:15:52.309164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.309201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.309587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.309628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.310007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.310063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.310430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.310470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.310803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.310844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.311167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.311180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.311486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.311510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.311838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.311879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.312256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.312296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 00:28:13.138 [2024-07-24 22:15:52.312683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.312733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.138 qpair failed and we were unable to recover it. 
00:28:13.138 [2024-07-24 22:15:52.313046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.138 [2024-07-24 22:15:52.313086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.139 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.313374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.313415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.313788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.313830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.314215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.314256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.314645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.314685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.315021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.315061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.315362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.315375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.315676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.315689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.316040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.316081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.316415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.316461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 
00:28:13.410 [2024-07-24 22:15:52.316856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.316897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.317273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.317313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.317699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.317750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.318137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.410 [2024-07-24 22:15:52.318177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.410 qpair failed and we were unable to recover it. 00:28:13.410 [2024-07-24 22:15:52.318442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.318483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.318869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.318908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.319288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.319328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.319646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.319685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.320015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.320056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.320444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.320484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 
00:28:13.411 [2024-07-24 22:15:52.320729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.320770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.321146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.321158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.321412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.321425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.321763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.321805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.322116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.322157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.322545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.322584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.322972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.323013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.323395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.323436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.323739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.323779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.324168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.324208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 
00:28:13.411 [2024-07-24 22:15:52.324616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.324657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.325075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.325117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.325360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.325400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.325781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.325822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.326190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.326230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.326463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.326504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.326905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.326946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.327256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.327297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.327541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.327582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.327965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.328005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 
00:28:13.411 [2024-07-24 22:15:52.328386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.328425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.328735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.328775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.329164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.329204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.329585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.329626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.330007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.330048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.330455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.330494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.330818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.330860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.331204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.331244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.331555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.331595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.411 [2024-07-24 22:15:52.331980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.332026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 
00:28:13.411 [2024-07-24 22:15:52.332399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.411 [2024-07-24 22:15:52.332439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.411 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.332825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.332866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.333255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.333296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.333679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.333731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.334069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.334109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.334446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.334486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.334901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.334942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.335336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.335376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.335766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.335807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.336096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.336136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 
00:28:13.412 [2024-07-24 22:15:52.336499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.336538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.336757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.336799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.337117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.337130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.337457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.337498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.337751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.337792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.338152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.338192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.338605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.338646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.338914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.338955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.339314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.339353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.339641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.339682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 
00:28:13.412 [2024-07-24 22:15:52.340096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.340137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.340525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.340565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.340928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.340969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.341358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.341398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.341789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.341845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.342176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.342189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.342441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.342454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.342781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.342794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.343058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.343098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.343460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.343500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 
00:28:13.412 [2024-07-24 22:15:52.343742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.343783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.344163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.344202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.344589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.344629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.345040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.345081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.345414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.345453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.345779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.412 [2024-07-24 22:15:52.345820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.412 qpair failed and we were unable to recover it. 00:28:13.412 [2024-07-24 22:15:52.346208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.346249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.346483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.346523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.346886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.346927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.347279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.347325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 
00:28:13.413 [2024-07-24 22:15:52.347727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.347768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.348076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.348117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.348476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.348515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.348926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.348968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.349283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.349324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.349679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.349691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.349919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.349932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.350178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.350190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.350433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.350446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.350770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.350782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 
00:28:13.413 [2024-07-24 22:15:52.351105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.351146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.351512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.351552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.351917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.351958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.352330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.352370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.352739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.352781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.353165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.353205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.353605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.353646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.354032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.354073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.354402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.354442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.354757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.354798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 
00:28:13.413 [2024-07-24 22:15:52.355159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.355200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.355553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.355594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.355987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.356040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.356298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.356311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.356704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.356757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.357119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.357160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.357551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.357591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.357922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.357964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.358233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.358246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.358519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.358554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 
00:28:13.413 [2024-07-24 22:15:52.358858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.358899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.359238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.359278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.359688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.413 [2024-07-24 22:15:52.359738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.413 qpair failed and we were unable to recover it. 00:28:13.413 [2024-07-24 22:15:52.360066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.360106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.360488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.360528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.360937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.360978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.361373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.361413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.361799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.361840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.362153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.362194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.362492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.362538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 
00:28:13.414 [2024-07-24 22:15:52.362909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.362950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.363336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.363377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.363689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.363740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.364125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.364165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.364486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.364527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.364933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.364974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.365288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.365328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.365713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.365781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.366171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.366212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.366515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.366527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 
00:28:13.414 [2024-07-24 22:15:52.366858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.366899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.367287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.367328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.367732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.367773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.368163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.368204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.368501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.368545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.368878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.368919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.369321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.369361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.369742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.369782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.370166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.370206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.370575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.370615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 
00:28:13.414 [2024-07-24 22:15:52.371003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.371044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.371426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.371466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.371856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.371897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.372287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.372327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.372648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.372660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.372993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.373035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.373418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.373464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.373851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.373892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.374211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.374252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 00:28:13.414 [2024-07-24 22:15:52.374551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.414 [2024-07-24 22:15:52.374591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.414 qpair failed and we were unable to recover it. 
00:28:13.415 [2024-07-24 22:15:52.374898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.374939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 00:28:13.415 [2024-07-24 22:15:52.375235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.375247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 00:28:13.415 [2024-07-24 22:15:52.375565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.375606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 00:28:13.415 [2024-07-24 22:15:52.375910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.375951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 00:28:13.415 [2024-07-24 22:15:52.376333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.376373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 00:28:13.415 [2024-07-24 22:15:52.376757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.376798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 00:28:13.415 [2024-07-24 22:15:52.377114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.377155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 00:28:13.415 [2024-07-24 22:15:52.377391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.377430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 00:28:13.415 [2024-07-24 22:15:52.377722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.377735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 00:28:13.415 [2024-07-24 22:15:52.378068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.415 [2024-07-24 22:15:52.378109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.415 qpair failed and we were unable to recover it. 
00:28:13.415 [2024-07-24 22:15:52.378505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.415 [2024-07-24 22:15:52.378546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:13.415 qpair failed and we were unable to recover it.
[The same connect() failure (errno = 111, i.e. connection refused) and the same nvme_tcp_qpair_connect_sock error for tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 repeat for every reconnect attempt between 22:15:52.378505 and 22:15:52.449681, each attempt ending with "qpair failed and we were unable to recover it."]
00:28:13.421 [2024-07-24 22:15:52.449641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.421 [2024-07-24 22:15:52.449681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:13.421 qpair failed and we were unable to recover it.
00:28:13.421 [2024-07-24 22:15:52.450054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.450095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.450505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.450545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.450931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.450973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.451287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.451327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.451656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.451696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.452098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.452139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.452456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.452496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.452804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.452817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.453170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.453210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.453630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.453670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 
00:28:13.421 [2024-07-24 22:15:52.453982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.453995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.454253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.454265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.454613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.454653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.455025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.455066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.455383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.455424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.455788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.455829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.456155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.456202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.456529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.456541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.421 qpair failed and we were unable to recover it. 00:28:13.421 [2024-07-24 22:15:52.456863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.421 [2024-07-24 22:15:52.456904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.457295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.457335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 
00:28:13.422 [2024-07-24 22:15:52.457728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.457768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.458083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.458123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.458513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.458554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.458929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.458970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.459358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.459398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.459729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.459741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.459931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.459943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.460286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.460326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.460711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.460764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.461166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.461207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 
00:28:13.422 [2024-07-24 22:15:52.461554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.461594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.461921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.461962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.462346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.462386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.462749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.462790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.463107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.463147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.463474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.463514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.463867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.463908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.464306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.464346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.464699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.464711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.465034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.465074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 
00:28:13.422 [2024-07-24 22:15:52.465457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.465497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.465844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.465886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.466288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.466328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.466730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.466773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.467163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.467204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.467595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.467636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.468032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.468074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.468459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.468498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.468885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.468926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.469312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.469352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 
00:28:13.422 [2024-07-24 22:15:52.469656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.469668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.470022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.470063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.470401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.470440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.470798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.470835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.471155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.471196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.471498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.422 [2024-07-24 22:15:52.471538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.422 qpair failed and we were unable to recover it. 00:28:13.422 [2024-07-24 22:15:52.471899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.471945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.472332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.472372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.472686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.472736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.473125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.473165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 
00:28:13.423 [2024-07-24 22:15:52.473545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.473586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.473970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.474011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.474400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.474441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.474802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.474843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.475141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.475188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.475520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.475560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.475950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.475991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.476378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.476418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.476796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.476808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.477128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.477140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 
00:28:13.423 [2024-07-24 22:15:52.477416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.477457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.477820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.477861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.478224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.478265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.478641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.478681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.478915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.478956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.479364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.479403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.479735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.479775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.480162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.480202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.480591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.480631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.480919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.480960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 
00:28:13.423 [2024-07-24 22:15:52.481322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.481363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.481676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.481726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.482118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.482159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.482554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.482595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.482980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.483021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.483336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.483377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.483699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.483750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.484139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.484179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.484561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.484600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.484908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.484922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 
00:28:13.423 [2024-07-24 22:15:52.485275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.485316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.485699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.485763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.486150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.423 [2024-07-24 22:15:52.486190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.423 qpair failed and we were unable to recover it. 00:28:13.423 [2024-07-24 22:15:52.486499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.486512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.486832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.486874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.487261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.487301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.487586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.487601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.487891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.487932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.488296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.488337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.488735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.488777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-07-24 22:15:52.489162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.489203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.489590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.489630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.490027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.490068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.490431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.490472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.490794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.490806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.491133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.491145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.491403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.491444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.491779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.491821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.492146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.492186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.492497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.492537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-07-24 22:15:52.492915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.492927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.493234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.493246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.493589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.493629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.494020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.494062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.494375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.494416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.494811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.494852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.495237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.495277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.495562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.495574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.495908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.495949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.496261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.496302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-07-24 22:15:52.496664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.496704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.497026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.497066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.497386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.497426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.497818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.497859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.498182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.498222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.498582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.498622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.499048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.424 [2024-07-24 22:15:52.499089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.424 qpair failed and we were unable to recover it. 00:28:13.424 [2024-07-24 22:15:52.499462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.499502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.499887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.499928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.500312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.500352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-07-24 22:15:52.500744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.500786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.501149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.501190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.501586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.501626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.502006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.502047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.502431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.502472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.502856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.502897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.503264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.503310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.503695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.503744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.503993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.504005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.504343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.504382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-07-24 22:15:52.504685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.504736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.505067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.505107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.505497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.505538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.505849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.505862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.506214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.506254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.506557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.506597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.506988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.507001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.507321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.507334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.507705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.507767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.508064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.508103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-07-24 22:15:52.508494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.508534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.508830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.508842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.509111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.509123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.509387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.509399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.509676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.509688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.510105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.510145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.510532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.510572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.510931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.510945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.511196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.511209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.511442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.511455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-07-24 22:15:52.511721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.511758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.425 [2024-07-24 22:15:52.512168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.425 [2024-07-24 22:15:52.512209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.425 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.512500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.512540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.512922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.512934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.513287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.513328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.513685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.513838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.514170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.514182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.514517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.514557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.514950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.514991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.515282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.515322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 
00:28:13.426 [2024-07-24 22:15:52.515611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.515652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.516060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.516101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.516462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.516502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.516725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.516737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.517068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.517109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.517492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.517532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.517844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.517891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.518280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.518320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.518711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.518763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.519125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.519166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 
00:28:13.426 [2024-07-24 22:15:52.519553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.519592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.519977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.520018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.520383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.520423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.520811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.520852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.521238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.521277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.521662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.521703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.522104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.522146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.522450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.522491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.522893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.522934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.523300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.523339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 
00:28:13.426 [2024-07-24 22:15:52.523658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.523670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.523906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.523918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.524253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.524293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.524684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.524733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.525106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.525147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.525462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.525502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.525858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.525871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.526173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.526189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.426 [2024-07-24 22:15:52.526519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.426 [2024-07-24 22:15:52.526560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.426 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.526934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.526975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 
00:28:13.427 [2024-07-24 22:15:52.527265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.527304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.527691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.527760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.528153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.528193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.528584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.528625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.529014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.529055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.529439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.529478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.529849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.529861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.530184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.530196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.530481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.530521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.530932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.530973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 
00:28:13.427 [2024-07-24 22:15:52.531339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.531379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.531772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.531813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.532173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.532214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.532522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.532562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.532795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.532836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.533234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.533275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.533638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.533683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.533979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.534012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.534402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.534443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.534739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.534751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 
00:28:13.427 [2024-07-24 22:15:52.535011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.535024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.535346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.535386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.535589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.535602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.535946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.535988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.536351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.536391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.536748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.536786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.537189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.537230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.537610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.537650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.538012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.538053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.538441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.538480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 
00:28:13.427 [2024-07-24 22:15:52.538799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.538812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.539062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.539074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.539402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.539414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.539770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.539810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.540202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.540243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.540607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.540646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.427 qpair failed and we were unable to recover it. 00:28:13.427 [2024-07-24 22:15:52.541020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.427 [2024-07-24 22:15:52.541060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.541446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.541486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.541883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.541925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.542290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.542330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 
00:28:13.428 [2024-07-24 22:15:52.542726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.542767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.543131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.543171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.543561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.543602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.543932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.543946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.544277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.544317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.544671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.544683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.545013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.545026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.545284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.545324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.545737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.545782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.546115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.546155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 
00:28:13.428 [2024-07-24 22:15:52.546558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.546599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.546796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.546809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.547038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.547051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.547350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.547362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.547519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.547532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.547836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.547877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.548198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.548244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.548627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.548668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.548974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.549016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.549400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.549440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 
00:28:13.428 [2024-07-24 22:15:52.549815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.549836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.550167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.550207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.550566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.550606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.550831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.550844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.551162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.551175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.551525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.551565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.551971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.552012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.552397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.552437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.552834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.552874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.553210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.553250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 
00:28:13.428 [2024-07-24 22:15:52.553570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.553611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.554001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.554042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.554425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.554465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.554773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.428 [2024-07-24 22:15:52.554821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.428 qpair failed and we were unable to recover it. 00:28:13.428 [2024-07-24 22:15:52.555160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.555201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.555602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.555642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.555985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.556026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.556353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.556393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.556783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.556825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.557089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.557129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 
00:28:13.429 [2024-07-24 22:15:52.557506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.557545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.557931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.557972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.558286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.558326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.558732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.558773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.559139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.559180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.559554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.559594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.559832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.559874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.560280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.560321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.560704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.560755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.561045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.561084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 
00:28:13.429 [2024-07-24 22:15:52.561473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.561513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.561894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.561936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.562222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.562262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.562649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.562689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.563022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.563064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.563448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.563487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.563792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.563805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.563967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.563980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.564244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.564284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.564620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.564660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 
00:28:13.429 [2024-07-24 22:15:52.564959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.564999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.565288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.565327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.565734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.565776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.566064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.566104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.566486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.566525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.566757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.566769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.567117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.429 [2024-07-24 22:15:52.567157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.429 qpair failed and we were unable to recover it. 00:28:13.429 [2024-07-24 22:15:52.567463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.567503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.567890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.567931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.568316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.568357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 
00:28:13.430 [2024-07-24 22:15:52.568754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.568796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.569120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.569160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.569469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.569509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.569905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.569946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.570320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.570360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.570746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.570786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.571149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.571189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.571565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.571605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.572001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.572043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.572431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.572470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 
00:28:13.430 [2024-07-24 22:15:52.572865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.572907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.573289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.573329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.573697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.573748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.574046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.574060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.574303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.574316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.574643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.574691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.575091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.575132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.575513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.575559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.575885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.575926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.576244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.576283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 
00:28:13.430 [2024-07-24 22:15:52.576621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.576661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.577056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.577097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.577477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.577517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.577875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.577919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.578238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.578278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.578662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.578702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.579016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.579055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.579376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.579416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.579809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.579850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.580086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.580127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 
00:28:13.430 [2024-07-24 22:15:52.580519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.580558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.580945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.580986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.581380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.581421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.581814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.581855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.430 [2024-07-24 22:15:52.582240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.430 [2024-07-24 22:15:52.582280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.430 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.582636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.582676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.583054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.583095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.583475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.583514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.583899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.583940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.584192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.584205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 
00:28:13.431 [2024-07-24 22:15:52.584527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.584568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.584876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.584916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.585307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.585347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.585679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.585728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.586029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.586042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.586292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.586305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.586573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.586586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.586946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.586986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.587378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.587419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.587805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.587846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 
00:28:13.431 [2024-07-24 22:15:52.588226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.588266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.588658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.588698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.588973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.588986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.589312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.589357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.589750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.589791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.590158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.590198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.590589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.590629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.591009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.591021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.591352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.591392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.591771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.591784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 
00:28:13.431 [2024-07-24 22:15:52.592109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.592122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.592493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.592533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.592870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.592911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.593272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.593313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.593703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.593754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.594134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.594147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.594476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.594516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.594882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.594929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.595315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.595356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.595730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.595771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 
00:28:13.431 [2024-07-24 22:15:52.596151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.596192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.596570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.431 [2024-07-24 22:15:52.596611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.431 qpair failed and we were unable to recover it. 00:28:13.431 [2024-07-24 22:15:52.596998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.597039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.597428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.597469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.597757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.597769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.598132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.598172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.598564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.598600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.598923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.598936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.599209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.599221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.599546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.599591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 
00:28:13.432 [2024-07-24 22:15:52.599939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.599981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.600283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.600322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.600705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.600757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.601140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.601180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.601407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.601448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.601773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.601813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.602125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.602166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.602566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.602605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.602973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.603015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.603394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.603435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 
00:28:13.432 [2024-07-24 22:15:52.603824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.603865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.604227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.604266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.604668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.604708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.605092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.605138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.605523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.605568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.605895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.605937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.606327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.606367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.606760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.606801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.607055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.607096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.607401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.607440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 
00:28:13.432 [2024-07-24 22:15:52.607700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.607751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.607980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.607992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.608325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.608365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.608768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.608810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.609152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.609192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.609498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.609538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.609858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.609871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.610140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.610153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.610581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.432 [2024-07-24 22:15:52.610621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.432 qpair failed and we were unable to recover it. 00:28:13.432 [2024-07-24 22:15:52.610920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-07-24 22:15:52.610933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 
00:28:13.433 [2024-07-24 22:15:52.611163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-07-24 22:15:52.611176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-07-24 22:15:52.611447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-07-24 22:15:52.611459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-07-24 22:15:52.611801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.611842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.612164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.612206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.612592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.612632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.613037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.613077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.613438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.613478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.613793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.613834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.614124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.614164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.614549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.614596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 
00:28:13.703 [2024-07-24 22:15:52.614931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.614973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.615341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.615381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.615767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-07-24 22:15:52.615809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.703 qpair failed and we were unable to recover it. 00:28:13.703 [2024-07-24 22:15:52.616191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.616231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.616618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.616658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.616946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.616959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.617341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.617381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.617747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.617783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.618108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.618149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.618440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.618480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 
00:28:13.704 [2024-07-24 22:15:52.618841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.618882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.619272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.619311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.619704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.619755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.620131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.620177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.620563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.620603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.620958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.620971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.621295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.621336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.621650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.621690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.622080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.622093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.622373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.622412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 
00:28:13.704 [2024-07-24 22:15:52.622677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.622734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.623092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.623132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.623514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.623553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.623930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.623943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.624283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.624323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.624655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.624695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.625089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.625130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.625516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.625556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.625935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.625977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.626268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.626281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 
00:28:13.704 [2024-07-24 22:15:52.626595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.626634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.626981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.627022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.627416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.627456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.627841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.627882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.628270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.628310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.628612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.628652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.628974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.628986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.629301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-07-24 22:15:52.629341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.704 qpair failed and we were unable to recover it. 00:28:13.704 [2024-07-24 22:15:52.629734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.629775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.630083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.630097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 
00:28:13.705 [2024-07-24 22:15:52.630338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.630351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.630535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.630548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.630892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.630933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.631316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.631355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.631741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.631783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.632085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.632097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.632451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.632491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.632877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.632918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.633286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.633327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.633712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.633774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 
00:28:13.705 [2024-07-24 22:15:52.634143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.634184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.634491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.634531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.634831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.634873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.635262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.635314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.635687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.635739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.636093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.636133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.636531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.636572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.636805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.636846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.637231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.637271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.637639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.637679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 
00:28:13.705 [2024-07-24 22:15:52.638083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.638124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.638449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.638490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.638878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.638918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.639302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.639342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.639676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.639725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.640116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.640156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.640538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.640577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.640968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.641009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.641411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.641453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.641816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.641857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 
00:28:13.705 [2024-07-24 22:15:52.642243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.642283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.642679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.642742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.643127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-07-24 22:15:52.643167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.705 qpair failed and we were unable to recover it. 00:28:13.705 [2024-07-24 22:15:52.643552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.643592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.643966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.643979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.644271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.644310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.644620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.644661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.645053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.645094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.645477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.645517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.645925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.645966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 
00:28:13.706 [2024-07-24 22:15:52.646265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.646278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.646576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.646620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.646984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.647025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.647409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.647449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.647756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.647797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.648098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.648111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.648447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.648486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.648853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.648894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.649177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.649191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.649375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.649388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 
00:28:13.706 [2024-07-24 22:15:52.649728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.649769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.650086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.650127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.650495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.650536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.650898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.650944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.651348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.651388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.651775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.651816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.652207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.652248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.652618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.652658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.653011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.653053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.653452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.653492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 
00:28:13.706 [2024-07-24 22:15:52.653854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.653872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.654212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.654254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.654703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.654754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.655150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.655191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.655583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.655623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.655999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.656040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.656433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.656473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.706 [2024-07-24 22:15:52.656864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.706 [2024-07-24 22:15:52.656904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.706 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.657173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.657185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.657520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.657560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 
00:28:13.707 [2024-07-24 22:15:52.657890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.657930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.658216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.658228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.658560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.658601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.659003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.659045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.659313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.659325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.659579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.659591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.659908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.659922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.660294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.660307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.660632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.660645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.660841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.660854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 
00:28:13.707 [2024-07-24 22:15:52.661160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.661173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.661358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.661371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.661641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.661653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.661924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.661937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.662249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.662262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.662494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.662506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.662808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.662821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.663103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.663116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.663372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.663384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.663706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.663725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 
00:28:13.707 [2024-07-24 22:15:52.663920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.663933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.664200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.664213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.664390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.664403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.664727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.664743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.664998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.665012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.665246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.665259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.665569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.665582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.665824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.665836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.666165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.666178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.666446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.666458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 
00:28:13.707 [2024-07-24 22:15:52.666779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.666793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.667119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.667132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.667460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.707 [2024-07-24 22:15:52.667473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.707 qpair failed and we were unable to recover it. 00:28:13.707 [2024-07-24 22:15:52.667746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.667759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.667993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.668006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.668324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.668337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.668592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.668605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.668930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.668944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.669196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.669208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.669531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.669544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 
00:28:13.708 [2024-07-24 22:15:52.669789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.669802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.670109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.670122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.670453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.670466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.670719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.670732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.671055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.671068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.671315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.671328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.671651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.671664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.671871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.671884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.672211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.672224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.672564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.672576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 
00:28:13.708 [2024-07-24 22:15:52.672928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.672941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.673122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.673134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.673432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.673445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.673707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.673741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.674047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.674060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.674307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.674320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.674502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.674515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.674770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.674783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.674945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.674958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.708 [2024-07-24 22:15:52.675259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.675271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 
00:28:13.708 [2024-07-24 22:15:52.675627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.708 [2024-07-24 22:15:52.675640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.708 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.675899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.675912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.676242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.676255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.676579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.676593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.676950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.676963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.677189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.677202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.677430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.677442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.677709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.677727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.677953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.677966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.678266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.678279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 
00:28:13.709 [2024-07-24 22:15:52.678511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.678523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.678707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.678725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.678990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.679003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.679243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.679256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.679596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.679609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.679915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.679928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.680227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.680240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.680543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.680556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.680802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.680816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.681043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.681055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 
00:28:13.709 [2024-07-24 22:15:52.681399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.681412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.681722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.681735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.682070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.682082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.682334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.682346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.682653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.682666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.682898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.682912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.683243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.683255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.683489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.683501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.683838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.683851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.684079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.684091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 
00:28:13.709 [2024-07-24 22:15:52.684419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.684432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.684733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.684745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.684998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.685011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.685264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.685276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.685601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.685614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.685931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.685944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.709 [2024-07-24 22:15:52.686176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.709 [2024-07-24 22:15:52.686189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.709 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.686420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.686432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.686736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.686749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.687003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.687015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 
00:28:13.710 [2024-07-24 22:15:52.687257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.687269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.687576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.687588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.687892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.687905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.688170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.688185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.688505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.688517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.688878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.688890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.689221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.689234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.689483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.689496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.689796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.689809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 00:28:13.710 [2024-07-24 22:15:52.690128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.710 [2024-07-24 22:15:52.690141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.710 qpair failed and we were unable to recover it. 
00:28:13.716 [2024-07-24 22:15:52.763774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.763786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.764114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.764154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.764538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.764578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.764958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.764999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.765311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.765352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.765746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.765787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.766092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.766132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.766423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.766435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.766757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.766799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.767183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.767222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 
00:28:13.716 [2024-07-24 22:15:52.767617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.767657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.768038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.768080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.768372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.768384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.768606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.768618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.768925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.768938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.769162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.769175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.769485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.769526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.769908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.769949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.770333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.770372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.770675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.770724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 
00:28:13.716 [2024-07-24 22:15:52.771119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.771134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.771393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.771433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.771752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.771793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.772109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.772149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.772538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.772579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.772966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.773007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.773317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.773358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.773747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.773787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-07-24 22:15:52.774106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-07-24 22:15:52.774147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.774532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.774572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 
00:28:13.717 [2024-07-24 22:15:52.774954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.774992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.775322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.775362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.775754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.775796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.776180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.776221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.776545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.776584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.776983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.777024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.777412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.777453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.777838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.777879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.778266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.778306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.778692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.778743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 
00:28:13.717 [2024-07-24 22:15:52.779078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.779119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.779502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.779541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.779860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.779901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.780253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.780293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.780613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.780624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.780919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.780962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.781345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.781385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.781694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.781707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.781984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.781997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.782340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.782380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 
00:28:13.717 [2024-07-24 22:15:52.782764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.782805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.783142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.783182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.783552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.783564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.783811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.783824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.784153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.784193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.784486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.784526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.784915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.784956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.785321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.785361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.785723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.785771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.786076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.786117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 
00:28:13.717 [2024-07-24 22:15:52.786436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.786483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.786865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.786906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.787261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.787273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.787606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-07-24 22:15:52.787618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-07-24 22:15:52.787941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.787955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.788269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.788310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.788687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.788740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.789102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.789142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.789533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.789573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.789936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.789977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 
00:28:13.718 [2024-07-24 22:15:52.790295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.790334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.790700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.790752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.791061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.791103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.791490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.791531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.791797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.791838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.792233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.792273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.792655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.792696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.793088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.793128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.793511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.793552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.793869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.793911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 
00:28:13.718 [2024-07-24 22:15:52.794302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.794342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.794675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.794723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.795028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.795069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.795446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.795496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.795887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.795928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.796319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.796359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.796745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.796786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.797125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.797169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.797424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.797437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.797776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.797816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 
00:28:13.718 [2024-07-24 22:15:52.798156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.798195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.798505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.798544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.798876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-07-24 22:15:52.798919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-07-24 22:15:52.799227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.799241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.799550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.799565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.799758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.799773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.800081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.800097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.800262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.800274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.800473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.800486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.800678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.800690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 
00:28:13.719 [2024-07-24 22:15:52.801011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.801026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.801346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.801386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.801670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.801710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.802058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.802098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.802494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.802534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.802943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.802988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.803255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.803295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.803673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.803724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.804059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.804100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.804379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.804418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 
00:28:13.719 [2024-07-24 22:15:52.804772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.804813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.805180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.805220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.805545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.805584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.805969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.806010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.806269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.806310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.806635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.806675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.807081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.807122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.807447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.807459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.807699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.807711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.807953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.807965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 
00:28:13.719 [2024-07-24 22:15:52.808133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.808173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.808559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.808598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.808984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.809025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.809364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.809404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.809793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.809834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.810215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.810255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.810497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.810537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.810849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-07-24 22:15:52.810889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-07-24 22:15:52.811184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.811225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.811635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.811675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 
00:28:13.720 [2024-07-24 22:15:52.812047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.812089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.812404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.812417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.812754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.812795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.813178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.813219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.813574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.813615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.814016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.814057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.814297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.814337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.814706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.814757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.815148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.815188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.815594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.815634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 
00:28:13.720 [2024-07-24 22:15:52.816028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.816076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.816410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.816423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.816792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.816833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.817220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.817260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.817645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.817658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.817887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.817928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.818273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.818313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.818653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.818692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.819085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.819125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.819512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.819551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 
00:28:13.720 [2024-07-24 22:15:52.819873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.819914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.820300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.820340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.820733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.820775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.821143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.821183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.821502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.821542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.821849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.821890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.822184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.822225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.822539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.822579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.822955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.822996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.823389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.823429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 
00:28:13.720 [2024-07-24 22:15:52.823824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.823867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.824257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.824296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.824577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.824616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-07-24 22:15:52.824928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-07-24 22:15:52.824970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.825311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.825352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.825760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.825801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.826134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.826173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.826485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.826498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.826741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.826754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.827010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.827023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 
00:28:13.721 [2024-07-24 22:15:52.827274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.827287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.827607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.827649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.827971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.828012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.828258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.828272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.828603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.828642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.828963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.829004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.829343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.829385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.829691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.829747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.830106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.830146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.830437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.830477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 
00:28:13.721 [2024-07-24 22:15:52.830814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.830861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.831172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.831212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.831470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.831483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.831710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.831727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.832049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.832062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.832230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.832242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.832495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.832508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.832763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.832777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.833027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.833040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.833348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.833388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 
00:28:13.721 [2024-07-24 22:15:52.833704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.833755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.834137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.834177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.834587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.834627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.835007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.835049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.835440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.835480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.835807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.835848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.836166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.836206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.836557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.836597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-07-24 22:15:52.836974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-07-24 22:15:52.837015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.837330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.837370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 
00:28:13.722 [2024-07-24 22:15:52.837759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.837800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.838185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.838226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.838597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.838638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.839031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.839073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.839440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.839481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.839843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.839883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.840241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.840281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.840609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.840649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.840977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.841018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.841344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.841356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 
00:28:13.722 [2024-07-24 22:15:52.841669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.841708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.842026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.842066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.842405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.842417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.842666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.842679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.842985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.842998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.843328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.843368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.843759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.843801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.844116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.844164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.844487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.844528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.844864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.844905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 
00:28:13.722 [2024-07-24 22:15:52.845210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.845256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.845689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.845742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.846072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.846112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.846501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.846541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.846900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.846913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.847167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.847180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.847496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.847536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.847905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.847946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.848330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.848370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.848744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.848786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 
00:28:13.722 [2024-07-24 22:15:52.849097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.849137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.849522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-07-24 22:15:52.849562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-07-24 22:15:52.849954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.849967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.850294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.850334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.850706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.850756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.851042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.851082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.851446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.851487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.851874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.851915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.852276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.852317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.852685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.852737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 
00:28:13.723 [2024-07-24 22:15:52.852998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.853039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.853415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.853455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.853765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.853807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.854138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.854180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.854596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.854636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.855035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.855077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.855487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.855526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.855817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.855858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.856192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.856233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.856618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.856658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 
00:28:13.723 [2024-07-24 22:15:52.857072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.857114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.857501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.857541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.857819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.857833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.858154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.858167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.858494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.858507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.858867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.858881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.859136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.859149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.859424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.859437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.859748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.859762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.860048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.860087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 
00:28:13.723 [2024-07-24 22:15:52.860496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-07-24 22:15:52.860544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-07-24 22:15:52.860921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.860934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.861177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.861190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.861554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.861594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.861989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.862003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.862255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.862280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.862478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.862492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.862763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.862777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.863039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.863081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.863486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.863529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 
00:28:13.724 [2024-07-24 22:15:52.863860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.863875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.864814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.864865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.865214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.865229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.865496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.865509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.865812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.865826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.866014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.866027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.866352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.866365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.866613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.866626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.866867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.866880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.867203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.867216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 
00:28:13.724 [2024-07-24 22:15:52.867502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.867515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.867852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.867865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.868144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.868157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.868486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.868499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.868824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.868837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.869174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.869187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.869447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.869460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.869712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.869730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.869995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.870009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.870240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.870254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 
00:28:13.724 [2024-07-24 22:15:52.870446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.870459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.870659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.724 [2024-07-24 22:15:52.870672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-07-24 22:15:52.871003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.871017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.871201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.871214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.871397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.871410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.871589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.871602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.871836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.871850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.872029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.872042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.872342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.872355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.872587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.872600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 
00:28:13.725 [2024-07-24 22:15:52.872797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.872814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.873045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.873059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.873254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.873267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.873513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.873527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.873687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.873699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.873953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.873966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.874172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.874185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.874509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.874522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.874859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.874873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.875031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.875045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 
00:28:13.725 [2024-07-24 22:15:52.875363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.875376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.875621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.875635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.875892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.875905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.876171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.876184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.876347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.876361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.876626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.876639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.876893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.876907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.877218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.877231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.725 [2024-07-24 22:15:52.877504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.725 [2024-07-24 22:15:52.877516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.725 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.877746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.877759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 
00:28:13.726 [2024-07-24 22:15:52.878017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.878030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.878356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.878370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.878524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.878537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.878733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.878746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.878938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.878951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.879181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.879193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.879510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.879524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.879763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.879776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.880080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.880093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.880416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.880429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 
00:28:13.726 [2024-07-24 22:15:52.880671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.880684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.880934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.880947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.881203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.881215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.881515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.881528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.881685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.881698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.881954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.881967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.882139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.882151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.882332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.882345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.882621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.882634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.882820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.882833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 
00:28:13.726 [2024-07-24 22:15:52.883077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.883092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.883258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.883271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.883516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.883529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.883755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.883769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.884004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.884018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.726 [2024-07-24 22:15:52.884249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.726 [2024-07-24 22:15:52.884262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.726 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.884514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.884527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.884701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.884721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.884972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.884985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.885178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.885191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 
00:28:13.727 [2024-07-24 22:15:52.885368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.885381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.885561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.885573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.885933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.885947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.886265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.886279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.886512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.886525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.886782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.886795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.886971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.886984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.887140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.887153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.887309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.887322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.887622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.887635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 
00:28:13.727 [2024-07-24 22:15:52.887867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.887880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.888130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.888142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.888382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.888395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.888569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.888581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.888779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.888792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.889017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.889030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.889270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.889282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.889512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.889525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.889871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.889884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.890202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.890214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 
00:28:13.727 [2024-07-24 22:15:52.890497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.890510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.890743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.890756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.891057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.891070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.891311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.891324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.891490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.727 [2024-07-24 22:15:52.891502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.727 qpair failed and we were unable to recover it. 00:28:13.727 [2024-07-24 22:15:52.891821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.891834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.892060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.892073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.892305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.892317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.892539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.892552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.892718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.892730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 
00:28:13.728 [2024-07-24 22:15:52.892909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.892924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.893242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.893255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.893429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.893442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.893710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.893728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.894031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.894044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.894343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.894356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.894583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.894597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.894841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.894854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.895097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.895110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.895429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.895441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 
00:28:13.728 [2024-07-24 22:15:52.895683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.895695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.895869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.895882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.896071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.896084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.896350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.896363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.896624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.896637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.896833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.896846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.897153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.897166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.897342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.897354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.897590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.897604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.897763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.897776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 
00:28:13.728 [2024-07-24 22:15:52.898089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.898102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.898328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.898340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.898596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.898610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.898909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.728 [2024-07-24 22:15:52.898923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.728 qpair failed and we were unable to recover it. 00:28:13.728 [2024-07-24 22:15:52.899247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.899260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.899503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.899516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.899771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.899785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.900012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.900024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.900370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.900383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.900594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.900607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 
00:28:13.729 [2024-07-24 22:15:52.900781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.900794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.901136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.901149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.901416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.901429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.901681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.901693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.901867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.901880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.902052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.902064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.902254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.902266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.902438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.902450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.902682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.902695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.902951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.902964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 
00:28:13.729 [2024-07-24 22:15:52.903277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.903291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.903468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.903480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.903799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.903812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.904035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.904048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.904345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.904357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:13.729 [2024-07-24 22:15:52.904541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.729 [2024-07-24 22:15:52.904553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:13.729 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.904848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.904861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.905088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.905103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.905287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.905300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.905613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.905626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 
00:28:14.005 [2024-07-24 22:15:52.905797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.905810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.906105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.906117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.906358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.906372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.906599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.906611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.906742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.906756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.906983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.906996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.907239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.907252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.907508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.907521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.907817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.907831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.908055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.908068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 
00:28:14.005 [2024-07-24 22:15:52.908237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.908249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.908568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.908583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.908856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.908869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.909024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.909036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.909228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.909241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.909535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.909548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.909802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.909815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.909995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.910008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.910284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.910297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.910470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.910483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 
00:28:14.005 [2024-07-24 22:15:52.910704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.910722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.910892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.910905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.005 qpair failed and we were unable to recover it. 00:28:14.005 [2024-07-24 22:15:52.911082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.005 [2024-07-24 22:15:52.911095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.911318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.911331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.911505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.911517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.911754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.911766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.912007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.912019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.912190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.912202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.912495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.912507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.912771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.912784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 
00:28:14.006 [2024-07-24 22:15:52.912994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.913006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.913155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.913168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.913411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.913423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.913587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.913599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.913862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.913874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.914136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.914149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.914398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.914411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.914656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.914668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.914843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.914856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.915097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.915109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 
00:28:14.006 [2024-07-24 22:15:52.915338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.915350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.915596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.915608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.915893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.915906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.916148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.916160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.916422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.916434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.916658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.916695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.916951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.916992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.917349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.917389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.917681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.917735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.918174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.918187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 
00:28:14.006 [2024-07-24 22:15:52.918355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.918367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.918660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.918673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.918852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.918865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.919027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.919040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.919283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.919324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.919486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.919526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.919765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.919806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.920090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.920135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.006 [2024-07-24 22:15:52.920358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.006 [2024-07-24 22:15:52.920401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.006 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.920725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.920737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 
00:28:14.007 [2024-07-24 22:15:52.920994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.921006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.921257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.921298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.921537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.921577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.921802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.921816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.922114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.922127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.922371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.922384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.922613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.922626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.922941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.922954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.923126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.923139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.923315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.923327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 
00:28:14.007 [2024-07-24 22:15:52.923497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.923509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.923747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.923760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.923919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.923932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.924150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.924162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.924324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.924336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.924605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.924617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.924939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.924951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.925195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.925236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.925458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.925471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.925622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.925637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 
00:28:14.007 [2024-07-24 22:15:52.925803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.925816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.926067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.926079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.926312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.926324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.926570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.926582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.926824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.926837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.927082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.927094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.927287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.927328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.927679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.927730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.928019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.928042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.928236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.928260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 
00:28:14.007 [2024-07-24 22:15:52.928456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.928472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.928649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.928663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.928885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.928900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.929130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.929145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.929341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.929356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.929603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.929618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.007 [2024-07-24 22:15:52.929955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.007 [2024-07-24 22:15:52.929967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.007 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.930208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.930223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.930390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.930403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.930558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.930570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 
00:28:14.008 [2024-07-24 22:15:52.930828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.930841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.931047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.931059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.931316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.931329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.931564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.931577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.931801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.931815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.932105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.932118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.932410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.932423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.932694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.932707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.932880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.932893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.933116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.933128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 
00:28:14.008 [2024-07-24 22:15:52.933296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.933309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.933480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.933495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.933787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.933799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.934037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.934054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.934396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.934411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.934583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.934598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.934844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.934857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.935006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.935020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.935313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.935326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.935427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.935439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 
00:28:14.008 [2024-07-24 22:15:52.935669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.935695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.935982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.936002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.936187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.936202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.936451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.936466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.936742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.936784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.937018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.937059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.937296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.937337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.937689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.937746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.938078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.938091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.938264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.938277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 
00:28:14.008 [2024-07-24 22:15:52.938500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.938512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.938810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.938823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.939047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.939060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.939300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.008 [2024-07-24 22:15:52.939312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.008 qpair failed and we were unable to recover it. 00:28:14.008 [2024-07-24 22:15:52.939559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.939572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.939806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.939819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.939980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.939993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.940082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.940096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.940321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.940333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.940488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.940500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 
00:28:14.009 [2024-07-24 22:15:52.940746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.940759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.940946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.940959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.941182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.941195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.941376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.941389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.941567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.941580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.941806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.941819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.942065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.942077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.942234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.942247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.942489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.942501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.942748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.942761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 
00:28:14.009 [2024-07-24 22:15:52.942987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.943000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.943245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.943257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.943407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.943420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.943639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.943651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.943827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.943840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.944166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.944178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.944412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.944425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.944660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.944676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.944858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.944871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.945107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.945122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 
00:28:14.009 [2024-07-24 22:15:52.945367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.945386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.945667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.945688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.945956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.945970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.946238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.946251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.009 [2024-07-24 22:15:52.946407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.009 [2024-07-24 22:15:52.946419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.009 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.946737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.946750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.946975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.946987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.947143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.947156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.947381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.947393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.947546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.947557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 
00:28:14.010 [2024-07-24 22:15:52.947733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.947746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.947903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.947915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.948069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.948081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.948269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.948281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.948442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.948455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.948709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.948730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.948964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.948976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.949206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.949252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.949570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.949611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.950693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.950729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 
00:28:14.010 [2024-07-24 22:15:52.951034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.951047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.951257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.951270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.951518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.951530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.951770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.951783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.951955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.951967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.952263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.952303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.952531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.952572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.952846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.952858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.953051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.953063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.953231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.953243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 
00:28:14.010 [2024-07-24 22:15:52.953431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.953444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.953668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.953680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.954006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.954047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.954332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.954372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.954677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.954736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.955056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.955096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.955393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.955433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.955704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.955722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.955964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.955976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.956238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.956250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 
00:28:14.010 [2024-07-24 22:15:52.956478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.956517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.956813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.010 [2024-07-24 22:15:52.956855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.010 qpair failed and we were unable to recover it. 00:28:14.010 [2024-07-24 22:15:52.957073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.957113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.957424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.957464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.957770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.957783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.957944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.957956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.958185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.958226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.958523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.958563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.958856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.958868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.959115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.959127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 
00:28:14.011 [2024-07-24 22:15:52.959389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.959401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.959573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.959587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.959761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.959802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.960124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.960167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.960511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.960523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.960704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.960721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.960892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.960904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.961051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.961064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.961282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.961293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.961385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.961397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 
00:28:14.011 [2024-07-24 22:15:52.961577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.961589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.961754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.961767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.961935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.961947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.962136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.962147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.962451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.962462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.962612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.962624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.962781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.962793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.963110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.963123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.963360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.963372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.963553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.963565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 
00:28:14.011 [2024-07-24 22:15:52.963804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.963816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.963987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.963999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.964217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.964229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.964459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.964471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.964710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.964729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.964945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.964958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.965174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.965186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.965349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.965361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.965555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.965568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 00:28:14.011 [2024-07-24 22:15:52.965793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.011 [2024-07-24 22:15:52.965813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.011 qpair failed and we were unable to recover it. 
00:28:14.012 [2024-07-24 22:15:52.965987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.966000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.966325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.966337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.966518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.966530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.966769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.966782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.966946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.966957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.967108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.967119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.967304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.967316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.967571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.967599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.967755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.967766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.967927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.967938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 
00:28:14.012 [2024-07-24 22:15:52.968179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.968192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.968432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.968445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.968665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.968694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.968876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.968917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.969125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.969165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.969377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.969416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.969623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.969635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.969902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.969917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.970090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.970103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.970194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.970205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 
00:28:14.012 [2024-07-24 22:15:52.970367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.970378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.970665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.970677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.970895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.970907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.971069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.971081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.971367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.971379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.971596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.971609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.971775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.971787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.972031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.972043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.972231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.972243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.972463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.972476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 
00:28:14.012 [2024-07-24 22:15:52.972711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.972731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.972952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.972964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.973182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.973194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.973417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.973429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.973683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.973695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.973959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.973972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.974140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.012 [2024-07-24 22:15:52.974152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.012 qpair failed and we were unable to recover it. 00:28:14.012 [2024-07-24 22:15:52.974391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.974403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.974575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.974587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.974803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.974816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 
00:28:14.013 [2024-07-24 22:15:52.975042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.975055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.975272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.975285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.975518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.975531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.975629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.975640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.975870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.975883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.976061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.976074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.976323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.976335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.976507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.976518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.976826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.976839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.977149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.977161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 
00:28:14.013 [2024-07-24 22:15:52.977383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.977395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.977562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.977573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.977733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.977745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.978031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.978044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.978275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.978287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.978517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.978529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.978680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.978691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.978916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.978931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.979164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.979176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.979405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.979417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 
00:28:14.013 [2024-07-24 22:15:52.979634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.979646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.979797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.979810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.980034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.980046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.980217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.980229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.980517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.980529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.980771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.980783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.981092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.981105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.981412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.981424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.981670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.981682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.981995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.982007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 
00:28:14.013 [2024-07-24 22:15:52.982194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.982207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.982457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.982470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.982706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.982723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.982886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.013 [2024-07-24 22:15:52.982898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.013 qpair failed and we were unable to recover it. 00:28:14.013 [2024-07-24 22:15:52.983141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.983153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.983387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.983399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.983685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.983697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.983865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.983877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.984041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.984053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.984223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.984235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 
00:28:14.014 [2024-07-24 22:15:52.984403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.984415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.984577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.984589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.984757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.984769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.984994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.985006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.985095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.985107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.985394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.985406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.985657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.985668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.985984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.985997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.986161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.986173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.986390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.986402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 
00:28:14.014 [2024-07-24 22:15:52.986565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.986576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.986883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.986895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.987145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.987157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.987390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.987402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.987580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.987592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.987832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.987845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.988059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.988072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.988312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.988326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.988563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.988575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.988827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.988839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 
00:28:14.014 [2024-07-24 22:15:52.989073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.989085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.989301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.989313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.989648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.989661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.989898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.989911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.990124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.990136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.014 [2024-07-24 22:15:52.990300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.014 [2024-07-24 22:15:52.990312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.014 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.990544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.990556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.990742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.990754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.990974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.990987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.991232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.991244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 
00:28:14.015 [2024-07-24 22:15:52.991480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.991492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.991721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.991734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.991969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.991981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.992215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.992227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.992456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.992469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.992698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.992709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.992895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.992907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.993211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.993223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.993442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.993454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.993710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.993737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 
00:28:14.015 [2024-07-24 22:15:52.993981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.993993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.994214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.994226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.994469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.994481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.994645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.994657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.994967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.994980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.995313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.995325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.995494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.995506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.995726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.995739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.995969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.995981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.996219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.996231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 
00:28:14.015 [2024-07-24 22:15:52.996461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.996473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.996718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.996730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.996882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.996894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.997136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.997148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.997395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.997407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.997636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.997648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.997818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.997830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.998158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.998174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.998356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.998367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.998532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.998544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 
00:28:14.015 [2024-07-24 22:15:52.998701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.998713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.999021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.999034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.999250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.015 [2024-07-24 22:15:52.999262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.015 qpair failed and we were unable to recover it. 00:28:14.015 [2024-07-24 22:15:52.999347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:52.999358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:52.999596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:52.999608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:52.999844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:52.999856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.000160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.000172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.000386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.000398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.000615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.000627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.000781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.000793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 
00:28:14.016 [2024-07-24 22:15:53.001028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.001040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.001278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.001290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.001575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.001587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.001804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.001817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.002045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.002057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.002273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.002285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.002618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.002630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.002870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.002883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.003117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.003129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.003343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.003355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 
00:28:14.016 [2024-07-24 22:15:53.003550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.003562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.003798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.003810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.004032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.004044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.004201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.004213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.004498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.004510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.004755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.004768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.004930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.004942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.005200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.005212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.005454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.005466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.005721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.005733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 
00:28:14.016 [2024-07-24 22:15:53.005841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.005853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.006021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.006033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.006133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.006145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.006373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.006385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.006601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.006613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.006831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.006844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.007158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.007198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.007550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.007591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.007814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.007826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 00:28:14.016 [2024-07-24 22:15:53.008058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.008070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.016 qpair failed and we were unable to recover it. 
00:28:14.016 [2024-07-24 22:15:53.008331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.016 [2024-07-24 22:15:53.008343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.008649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.008661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.008909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.008921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.009104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.009116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.009299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.009338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.009634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.009674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.009916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1405210 is same with the state(5) to be set 00:28:14.017 [2024-07-24 22:15:53.010239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.010275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.010547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.010565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.010853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.010896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 
00:28:14.017 [2024-07-24 22:15:53.011133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.011173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.011482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.011531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.011816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.011857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.012105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.012122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.012416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.012432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.012679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.012695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.012915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.012929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.013224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.013236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.013404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.013417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.013567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.013580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 
00:28:14.017 [2024-07-24 22:15:53.013834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.013846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.014078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.014090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.014262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.014274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.014588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.014627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.014934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.014975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.015353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.015393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.015683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.015734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.015977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.016017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.016364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.016404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.016737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.016777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 
00:28:14.017 [2024-07-24 22:15:53.017013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.017053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.017351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.017363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.017530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.017543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.017692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.017705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.017916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.017956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.018258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.018298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.018612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.018652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.019002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.017 [2024-07-24 22:15:53.019043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.017 qpair failed and we were unable to recover it. 00:28:14.017 [2024-07-24 22:15:53.019334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.019374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.019676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.019728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 
00:28:14.018 [2024-07-24 22:15:53.020011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.020023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.020189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.020202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.020378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.020390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.020607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.020651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.021007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.021047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.021277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.021317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.021598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.021638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.021913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.021925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.022089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.022101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.022197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.022209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 
00:28:14.018 [2024-07-24 22:15:53.022493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.022506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.022791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.022805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.022975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.022987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.023225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.023237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.023454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.023467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.023640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.023652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.023801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.023829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.024145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.024184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.024484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.024524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.024822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.024863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 
00:28:14.018 [2024-07-24 22:15:53.025168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.025180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.025425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.025437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.025721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.025733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.025987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.026026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.026262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.026302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.026539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.026579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.026850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.026863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.027149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.027161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.027322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.027344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.027646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.027658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 
00:28:14.018 [2024-07-24 22:15:53.027892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.027905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.028188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.028200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.028435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.028447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.028681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.028693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.028793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.028804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.018 qpair failed and we were unable to recover it. 00:28:14.018 [2024-07-24 22:15:53.028955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.018 [2024-07-24 22:15:53.028967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.029117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.029129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.029365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.029377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.029596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.029608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.029909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.029922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 
00:28:14.019 [2024-07-24 22:15:53.030067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.030079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.030319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.030360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.030583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.030622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.030835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.030875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.031123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.031135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.031363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.031376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.031546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.031558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.031862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.031874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.032044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.032056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.032290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.032330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 
00:28:14.019 [2024-07-24 22:15:53.032553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.032593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.032895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.032941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.033228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.033268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.033574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.033614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.033915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.033927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.034158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.034171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.034388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.034401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.034613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.034653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.035040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.035080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.035387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.035427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 
00:28:14.019 [2024-07-24 22:15:53.035702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.035755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.036103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.036142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.036365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.036404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.036753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.036793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.037007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.037048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.037459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.037499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.019 [2024-07-24 22:15:53.037724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.019 [2024-07-24 22:15:53.037765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.019 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.038079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.038119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.038467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.038506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.038791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.038832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 
00:28:14.020 [2024-07-24 22:15:53.039110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.039149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.039396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.039436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.039759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.039801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.040149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.040189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.040498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.040546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.040762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.040775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.041061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.041074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.041320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.041332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.041617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.041629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.041916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.041967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 
00:28:14.020 [2024-07-24 22:15:53.042338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.042378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.042734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.042775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.043083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.043123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.043471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.043510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.043822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.043862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.044165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.044205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.044495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.044535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.044838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.044878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.045221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.045234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 00:28:14.020 [2024-07-24 22:15:53.045571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.020 [2024-07-24 22:15:53.045583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.020 qpair failed and we were unable to recover it. 
00:28:14.020 [2024-07-24 22:15:53.045799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.020 [2024-07-24 22:15:53.045812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.020 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 22:15:53.045 through 22:15:53.100, log prefixes 00:28:14.020 through 00:28:14.026 ...]
00:28:14.026 [2024-07-24 22:15:53.100559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.026 [2024-07-24 22:15:53.100571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.026 qpair failed and we were unable to recover it.
00:28:14.026 [2024-07-24 22:15:53.100880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.100892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.101124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.101136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.101349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.101361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.101595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.101607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.101758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.101770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.102028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.102040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.102293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.102305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.102563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.102575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.102789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.102802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.103043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.103055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 
00:28:14.026 [2024-07-24 22:15:53.103276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.103288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.103537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.103549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.103728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.103741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.104024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.104036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.104338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.104351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.104634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.104646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.104956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.104968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.105117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.105129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.105442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.105455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 00:28:14.026 [2024-07-24 22:15:53.105672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.026 [2024-07-24 22:15:53.105684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.026 qpair failed and we were unable to recover it. 
00:28:14.027 [2024-07-24 22:15:53.105873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.105885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.106112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.106124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.106454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.106466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.106633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.106645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.106871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.106883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.106983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.106995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.107141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.107153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.107438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.107450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.107631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.107643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.107893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.107906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 
00:28:14.027 [2024-07-24 22:15:53.107993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.108005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.108223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.108239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.108390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.108402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.108635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.108647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.108954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.108967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.109257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.109269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.109444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.109457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.109642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.109653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.109936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.109948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.110265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.110306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 
00:28:14.027 [2024-07-24 22:15:53.110609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.110648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.110939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.110980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.111352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.111393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.111760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.111801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.112160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.112201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.112611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.112651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.112951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.112991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.113241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.113253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.113548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.113560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.113808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.113821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 
00:28:14.027 [2024-07-24 22:15:53.114059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.114072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.114366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.114411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.114659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.114699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.114983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.115023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.115376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.115416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.115691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.115744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.027 qpair failed and we were unable to recover it. 00:28:14.027 [2024-07-24 22:15:53.115957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.027 [2024-07-24 22:15:53.115997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.116364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.116386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.116632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.116678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.116987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.117027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 
00:28:14.028 [2024-07-24 22:15:53.117322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.117334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.117647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.117686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.118060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.118101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.118403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.118414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.118678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.118690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.118939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.118980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.119259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.119306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.119590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.119602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.119815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.119827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.120057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.120068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 
00:28:14.028 [2024-07-24 22:15:53.120290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.120302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.120520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.120532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.120841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.120883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.121127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.121167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.121448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.121487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.121791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.121832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.122163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.122203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.122420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.122459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.122699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.122751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.123053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.123065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 
00:28:14.028 [2024-07-24 22:15:53.123280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.123292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.123461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.123473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.123623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.123634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.123888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.123900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.124099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.124140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.124428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.124468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.124748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.124789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.125075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.125115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.125408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.028 [2024-07-24 22:15:53.125448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.028 qpair failed and we were unable to recover it. 00:28:14.028 [2024-07-24 22:15:53.125755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.125796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 
00:28:14.029 [2024-07-24 22:15:53.126030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.126070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.126300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.126340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.126572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.126612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.126904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.126945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.127229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.127269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.127489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.127529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.127834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.127876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.128101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.128141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.128370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.128415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.128697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.128747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 
00:28:14.029 [2024-07-24 22:15:53.128963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.129003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.129258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.129270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.129491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.129503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.129758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.129771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.130060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.130100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.130344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.130379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.130537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.130549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.130767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.130779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.131013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.131053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.131333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.131373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 
00:28:14.029 [2024-07-24 22:15:53.131669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.131709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.131983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.132024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.132371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.132411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.132746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.132787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.133088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.133100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.133330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.133342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.133559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.133571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.133824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.133836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.134075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.134087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.134258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.134270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 
00:28:14.029 [2024-07-24 22:15:53.134506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.134546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.134764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.134805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.135097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.135136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.135419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.135460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.135807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.135848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.029 [2024-07-24 22:15:53.136132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.029 [2024-07-24 22:15:53.136172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.029 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.136472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.136512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.136753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.136795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.137145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.137184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.137356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.137368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 
00:28:14.030 [2024-07-24 22:15:53.137692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.137705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.137937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.137976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.138211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.138251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.138530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.138569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.138952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.138993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.139288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.139300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.139480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.139519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.139743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.139783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.140057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.140071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.140240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.140279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 
00:28:14.030 [2024-07-24 22:15:53.140560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.140600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.140911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.140954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.141184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.141224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.141522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.141562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.141799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.141841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.142052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.142091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.142321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.142332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.142492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.142532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.142813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.142854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.143074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.143087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 
00:28:14.030 [2024-07-24 22:15:53.143306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.143318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.143466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.143478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.143766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.143779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.143935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.143947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.144162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.144174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.144398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.144437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.144737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.144780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.145059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.145099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.145390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.145430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.145659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.145700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 
00:28:14.030 [2024-07-24 22:15:53.145936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.145976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.146328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.146367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.146662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.030 [2024-07-24 22:15:53.146702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.030 qpair failed and we were unable to recover it. 00:28:14.030 [2024-07-24 22:15:53.147001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.147040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.147272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.147312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.147582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.147594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.147820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.147832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.148059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.148070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.148288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.148300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.148602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.148651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 
00:28:14.031 [2024-07-24 22:15:53.148899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.148940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.149235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.149276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.149544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.149584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.149874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.149914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.150293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.150335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.150615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.150655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.151026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.151066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.151359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.151399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.151625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.151671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.151914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.151954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 
00:28:14.031 [2024-07-24 22:15:53.152242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.152282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.152611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.152651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.153030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.153071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.153315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.153327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.153493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.153533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.153740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.153782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.153949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.153989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.154190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.154202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.154417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.154428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.154655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.154695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 
00:28:14.031 [2024-07-24 22:15:53.155002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.155042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.155285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.155325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.155619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.155659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.155964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.156004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.156290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.156330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.156639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.156679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.157061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.157101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.157413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.157453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.157670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.157710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 00:28:14.031 [2024-07-24 22:15:53.158045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.031 [2024-07-24 22:15:53.158086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.031 qpair failed and we were unable to recover it. 
00:28:14.031 [2024-07-24 22:15:53.158375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.158414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.158662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.158702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.159015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.159056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.159377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.159417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.159812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.159854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.160075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.160115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.160333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.160345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.160510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.160522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.160813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.160825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.161042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.161054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 
00:28:14.032 [2024-07-24 22:15:53.161297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.161309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.161528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.161540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.161771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.161784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.161947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.161959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.162185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.162197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.162350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.162362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.162682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.162695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.162866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.162878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.163154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.163169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.163330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.163342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 
00:28:14.032 [2024-07-24 22:15:53.163519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.163531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.163707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.163724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.163876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.163888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.164059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.164071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.164290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.164302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.164541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.164553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.164710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.164728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.164960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.164972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.165200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.165212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.165449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.165461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 
00:28:14.032 [2024-07-24 22:15:53.165692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.165704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.165952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.165964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.166190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.166202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.166454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.166466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.166698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.166710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.166884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.166896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.167072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.032 [2024-07-24 22:15:53.167084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.032 qpair failed and we were unable to recover it. 00:28:14.032 [2024-07-24 22:15:53.167332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.167344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.167562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.167574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.167809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.167821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 
00:28:14.033 [2024-07-24 22:15:53.168011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.168023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.168250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.168261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.168421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.168433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.168651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.168663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.168820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.168833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.169017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.169029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.169248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.169260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.169422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.169434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.169592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.169604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.169890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.169902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 
00:28:14.033 [2024-07-24 22:15:53.170054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.170065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.170214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.170226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.170396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.170408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.170503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.170515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.170691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.170703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.170994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.171006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.171173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.171184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.171335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.171347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.171528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.171542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.171850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.171862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 
00:28:14.033 [2024-07-24 22:15:53.172098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.172110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.172343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.172355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.172623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.172635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.172876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.172888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.173110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.173122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.173413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.173425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.173641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.173653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.173892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.173909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.033 qpair failed and we were unable to recover it. 00:28:14.033 [2024-07-24 22:15:53.174058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.033 [2024-07-24 22:15:53.174070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.174384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.174396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 
00:28:14.034 [2024-07-24 22:15:53.174569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.174581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.174751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.174763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.175053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.175065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.175331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.175343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.175632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.175644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.175937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.175949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.176170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.176182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.176335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.176347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.176563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.176575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.176814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.176826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 
00:28:14.034 [2024-07-24 22:15:53.177065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.177077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.177317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.177329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.177568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.177580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.177835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.177847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.178088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.178100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.178270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.178282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.178565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.178577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.178751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.178763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.179001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.179014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.179248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.179260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 
00:28:14.034 [2024-07-24 22:15:53.179545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.179557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.179727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.179740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.179926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.179938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.180091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.180102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.180409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.180422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.180637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.180649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.180816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.180828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.181055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.181067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.181224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.181237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.181476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.181488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 
00:28:14.034 [2024-07-24 22:15:53.181659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.181671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.181886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.181898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.181995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.182006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.182228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.182240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.182464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.182476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.034 qpair failed and we were unable to recover it. 00:28:14.034 [2024-07-24 22:15:53.182712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.034 [2024-07-24 22:15:53.182727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.183012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.183024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.183246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.183258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.183424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.183436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.183675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.183687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 
00:28:14.035 [2024-07-24 22:15:53.183845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.183857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.184023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.184035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.184267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.184279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.184526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.184538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.184768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.184780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.185068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.185081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.185249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.185261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.185429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.185441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.185660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.185673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.185845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.185857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 
00:28:14.035 [2024-07-24 22:15:53.186162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.186174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.186280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.186292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.186454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.186466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.186720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.186732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.186903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.186915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.187161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.187173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.187405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.187417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.187602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.187614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.187764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.187776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.187995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.188007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 
00:28:14.035 [2024-07-24 22:15:53.188295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.188308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.188526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.188537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.188839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.188851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.189160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.189172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.189346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.189358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.189574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.189586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.189769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.189781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.190018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.190030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.190313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.190327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.190590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.190603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 
00:28:14.035 [2024-07-24 22:15:53.190928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.190941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.191194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.191206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.035 [2024-07-24 22:15:53.191385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.035 [2024-07-24 22:15:53.191398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.035 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.191633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.191645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.191873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.191885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.192104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.192116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.192336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.192348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.192585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.192597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.192904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.192916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.193085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.193097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 
00:28:14.036 [2024-07-24 22:15:53.193320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.193332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.193618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.193630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.193860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.193873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.194174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.194186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.194351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.194363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.194673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.194685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.194908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.194920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.195199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.195211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.195497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.195509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.195746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.195758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 
00:28:14.036 [2024-07-24 22:15:53.196043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.196055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.196224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.196236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.196403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.196415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.196723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.196735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.196924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.196936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.197280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.197292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.197476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.197488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.197705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.197721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.197897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.197910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.198126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.198138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 
00:28:14.036 [2024-07-24 22:15:53.198307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.198319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.198631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.198643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.198819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.198831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.199068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.199080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.199377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.199389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.199635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.199647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.199888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.199900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.200134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.200146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.200408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.200422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 00:28:14.036 [2024-07-24 22:15:53.200649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.200661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.036 qpair failed and we were unable to recover it. 
00:28:14.036 [2024-07-24 22:15:53.200876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.036 [2024-07-24 22:15:53.200888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.037 qpair failed and we were unable to recover it. 00:28:14.037 [2024-07-24 22:15:53.201122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.037 [2024-07-24 22:15:53.201133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.037 qpair failed and we were unable to recover it. 00:28:14.037 [2024-07-24 22:15:53.201466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.037 [2024-07-24 22:15:53.201478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.037 qpair failed and we were unable to recover it. 00:28:14.037 [2024-07-24 22:15:53.201735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.037 [2024-07-24 22:15:53.201748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.037 qpair failed and we were unable to recover it. 00:28:14.037 [2024-07-24 22:15:53.202064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.037 [2024-07-24 22:15:53.202076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.037 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.202311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.202324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.202547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.202560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.202820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.202833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.203069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.203081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.203254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.203266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 
00:28:14.316 [2024-07-24 22:15:53.203500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.203512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.203795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.203807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.204042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.204054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.204286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.204298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.204531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.204543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.204848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.204861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.205041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.205053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.205274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.205286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.205529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.205541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 00:28:14.316 [2024-07-24 22:15:53.205765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.316 [2024-07-24 22:15:53.205778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.316 qpair failed and we were unable to recover it. 
00:28:14.317 [2024-07-24 22:15:53.205961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.205973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.206154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.206165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.206428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.206440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.206623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.206635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.206971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.206983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.207213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.207225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.207462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.207474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.207742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.207754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.207978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.207990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.208216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.208228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 
00:28:14.317 [2024-07-24 22:15:53.208386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.208398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.208628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.208640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.208805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.208817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.209125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.209137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.209304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.209316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.209549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.209561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.209721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.209733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.209845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.209857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.210075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.210089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.210374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.210386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 
00:28:14.317 [2024-07-24 22:15:53.210607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.210619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.210805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.210818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.210992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.211003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.211248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.211261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.211496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.211508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.211739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.211751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.211920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.211932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.212160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.212172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.212455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.212467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.212754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.212766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 
00:28:14.317 [2024-07-24 22:15:53.212995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.213007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.213239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.213251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.213472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.213485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.213728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.213740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.213908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.213920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.214173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.214185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.214426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.214438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.214610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.214622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.214818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.214830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.215064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.215076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 
00:28:14.317 [2024-07-24 22:15:53.215336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.215347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.215582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.215594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.215894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.215906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.216131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.216143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.216466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.216478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.216644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.216655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.216890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.216902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.217067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.217079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.217245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.217257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.217415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.217427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 
00:28:14.317 [2024-07-24 22:15:53.217756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.217769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.218070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.218082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.218392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.218404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.218722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.218734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.219043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.219055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.219271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.317 [2024-07-24 22:15:53.219283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.317 qpair failed and we were unable to recover it. 00:28:14.317 [2024-07-24 22:15:53.219569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.219581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.219749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.219761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.220009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.220023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.220268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.220280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 
00:28:14.318 [2024-07-24 22:15:53.220573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.220585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.220890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.220902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.221235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.221247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.221554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.221566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.221737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.221750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.221997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.222010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.222227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.222239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.222565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.222577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.222807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.222820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 00:28:14.318 [2024-07-24 22:15:53.223035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.223047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it. 
00:28:14.318 [2024-07-24 22:15:53.223262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.318 [2024-07-24 22:15:53.223274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.318 qpair failed and we were unable to recover it.
00:28:14.318 [... the same three-line error repeats continuously (roughly 200 occurrences, timestamps 22:15:53.223 through 22:15:53.279), always for the same tqpair=0x7f2d70000b90, addr=10.0.0.2, port=4420 ...]
00:28:14.322 [2024-07-24 22:15:53.278979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.279000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it.
00:28:14.322 [2024-07-24 22:15:53.279201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.279223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.279421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.279440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.279696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.279710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.279817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.279829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.280048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.280061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.280297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.280312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.280591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.280603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.280835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.280847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.281101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.281113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.281360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.281372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 
00:28:14.322 [2024-07-24 22:15:53.281557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.281569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.281739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.281751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.281920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.281932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.282194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.282207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.282515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.282527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.282695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.282707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.283038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.283050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.283251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.283263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.283495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.283507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.283741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.283754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 
00:28:14.322 [2024-07-24 22:15:53.283907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.283919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.284203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.284215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.284431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.284444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.284603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.284615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.284913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.284926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.285210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.285223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.285448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.285461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.285718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.285731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.285987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.285999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.286161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.286173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 
00:28:14.322 [2024-07-24 22:15:53.286386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.286399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.286610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.286623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.286856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.286869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.287144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.287156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.287381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.287393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.287613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.322 [2024-07-24 22:15:53.287625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.322 qpair failed and we were unable to recover it. 00:28:14.322 [2024-07-24 22:15:53.287778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.287790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.288014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.288026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.288277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.288289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.288510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.288522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 
00:28:14.323 [2024-07-24 22:15:53.288755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.288767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.289071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.289084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.289370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.289382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.289635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.289647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.289863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.289876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.290044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.290058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.290316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.290328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.290577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.290589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.290818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.290831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.291081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.291094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 
00:28:14.323 [2024-07-24 22:15:53.291268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.291280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.291494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.291506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.291803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.291815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.292127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.292139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.292379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.292391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.292620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.292633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.292809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.292821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.292985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.292997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.293230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.293243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.293552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.293565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 
00:28:14.323 [2024-07-24 22:15:53.293849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.293862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.294184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.294196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.294377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.294389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.294609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.294621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.294851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.294863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.295100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.295112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.295409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.295421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.295730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.295743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.296027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.296039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.296254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.296266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 
00:28:14.323 [2024-07-24 22:15:53.296519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.296531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.296702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.296718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.296935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.296948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.297186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.297198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.297426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.297439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.297593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.297605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.297783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.297795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.298105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.298117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.298413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.298426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.298641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.298653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 
00:28:14.323 [2024-07-24 22:15:53.298837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.298850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.299083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.299095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.299269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.299281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.299500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.299512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.299843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.299856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.300009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.300024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.300177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.300189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.300423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.300435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.300534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.300546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.300721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.300734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 
00:28:14.323 [2024-07-24 22:15:53.301016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.301028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.301207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.301219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.301383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.301395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.301708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.301724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.301819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.301830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.302116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.302128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.302354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.302367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.302650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.323 [2024-07-24 22:15:53.302663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.323 qpair failed and we were unable to recover it. 00:28:14.323 [2024-07-24 22:15:53.302994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.303007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.303178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.303190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 
00:28:14.324 [2024-07-24 22:15:53.303349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.303362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.303594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.303607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.303860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.303872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.304107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.304119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.304340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.304353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.304659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.304671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.304994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.305007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.305191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.305203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.305513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.305525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.305750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.305763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 
00:28:14.324 [2024-07-24 22:15:53.305991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.306003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.306220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.306231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.306540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.306552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.306768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.306781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.307015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.307027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.307274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.307287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.307517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.307529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.307760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.307773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.308068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.308080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.308375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.308388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 
00:28:14.324 [2024-07-24 22:15:53.308722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.308735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.308963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.308976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.309137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.309149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.309335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.309347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.309601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.309614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.309829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.309843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.310070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.310082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.310321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.310333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.310656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.310669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.310906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.310918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 
00:28:14.324 [2024-07-24 22:15:53.311135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.311148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.311314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.311326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.311631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.311643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.311859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.311871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.312024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.312036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.312265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.312277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.312563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.312576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.312814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.312827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.313152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.313165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.313401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.313414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 
00:28:14.324 [2024-07-24 22:15:53.313704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.313721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2852646 Killed "${NVMF_APP[@]}" "$@" 00:28:14.324 [2024-07-24 22:15:53.313899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.313912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.314219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.314231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.314409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.314421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:14.324 [2024-07-24 22:15:53.314664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.314678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.314848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.314861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:14.324 [2024-07-24 22:15:53.315101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.315115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.315357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.315370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 
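(Editorial note, not part of the captured log.) The flood of `connect() failed, errno = 111` entries above coincides with the `Killed "${NVMF_APP[@]}"` message from target_disconnect.sh line 36: once the nvmf target process is gone, every TCP connect attempt to 10.0.0.2:4420 is refused until `nvmfappstart` brings the target back, which is why the host side keeps reporting that the qpair could not be recovered. As a rough, self-contained illustration (an assumption-free reproduction of the errno only, not SPDK code), the sketch below shows the same value surfacing from a plain POSIX connect(); on Linux, errno 111 is ECONNREFUSED.

/*
 * Illustrative sketch only: connect to the address/port seen in the log
 * (10.0.0.2:4420) while nothing is listening there, and observe errno 111
 * (ECONNREFUSED). This is not the SPDK test code, just a minimal demo of
 * where the repeated errno value comes from.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* While no listener is up on that port, this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}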
00:28:14.324 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:14.324 [2024-07-24 22:15:53.315571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.315583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.315809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.315823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:14.324 [2024-07-24 22:15:53.316109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.316124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.324 [2024-07-24 22:15:53.316347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.316360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.316534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.316546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.316763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.316776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.317065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.317077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.317308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.317320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.317568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.317581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 
00:28:14.324 [2024-07-24 22:15:53.317827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.317841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.318108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.318121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.318340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.318352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.318660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.318674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.318892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.318904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.319163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.319175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.319357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.319369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.319602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-07-24 22:15:53.319615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.324 qpair failed and we were unable to recover it. 00:28:14.324 [2024-07-24 22:15:53.319830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.319843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.320076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.320088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 
00:28:14.325 [2024-07-24 22:15:53.320246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.320258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.320472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.320485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.320661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.320673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.320935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.320948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.321183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.321197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.321415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.321428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.321683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.321696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.321940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.321953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.322271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.322283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.322571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.322584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 
00:28:14.325 [2024-07-24 22:15:53.322813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.325 [2024-07-24 22:15:53.322826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.325 qpair failed and we were unable to recover it.
00:28:14.325 [2024-07-24 22:15:53.323133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.325 [2024-07-24 22:15:53.323145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.325 qpair failed and we were unable to recover it.
00:28:14.325 [2024-07-24 22:15:53.323327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.325 [2024-07-24 22:15:53.323339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.325 qpair failed and we were unable to recover it.
00:28:14.325 [2024-07-24 22:15:53.323516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.325 [2024-07-24 22:15:53.323528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.325 qpair failed and we were unable to recover it.
00:28:14.325 [2024-07-24 22:15:53.323704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.325 [2024-07-24 22:15:53.323720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.325 qpair failed and we were unable to recover it.
00:28:14.325 [2024-07-24 22:15:53.323946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.325 [2024-07-24 22:15:53.323958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.325 qpair failed and we were unable to recover it.
00:28:14.325 [2024-07-24 22:15:53.324178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.325 [2024-07-24 22:15:53.324191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.325 qpair failed and we were unable to recover it.
00:28:14.325 [2024-07-24 22:15:53.324491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.325 [2024-07-24 22:15:53.324505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.325 qpair failed and we were unable to recover it.
00:28:14.325 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2853517
00:28:14.325 [2024-07-24 22:15:53.324735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.325 [2024-07-24 22:15:53.324749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.325 qpair failed and we were unable to recover it.
00:28:14.325 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2853517 00:28:14.325 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:14.325 [2024-07-24 22:15:53.324971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.324988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.325272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.325285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2853517 ']' 00:28:14.325 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.325 [2024-07-24 22:15:53.325595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.325610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.325867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:14.325 [2024-07-24 22:15:53.325881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.326103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.326116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.325 [2024-07-24 22:15:53.326382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.326395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 
00:28:14.325 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:14.325 [2024-07-24 22:15:53.326625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.326639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 22:15:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.325 [2024-07-24 22:15:53.326895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.326910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.327128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.327141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.327461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.327474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.327784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.327796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.327954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.327967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.328131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.328143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.328437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.328450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.328687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.328699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 
00:28:14.325 [2024-07-24 22:15:53.328934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.328948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.329237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.329250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.329495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.329507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.329689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.329702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.329869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.329883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.330109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.330122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.330337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.330350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.330535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.330548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.330785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.330801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.331035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.331048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 
00:28:14.325 [2024-07-24 22:15:53.331264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.331278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.331502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.331515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.331739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.331753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.331993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.332006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.332237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.332251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.332478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.332492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.332744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.332757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.333094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.333107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.333361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.333373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.333539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.333552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 
00:28:14.325 [2024-07-24 22:15:53.333779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.333792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.334102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.334114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.334424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.334436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.334664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.334676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.334901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.334916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.335106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-07-24 22:15:53.335118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.325 qpair failed and we were unable to recover it. 00:28:14.325 [2024-07-24 22:15:53.335357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.335371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.335544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.335556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.335790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.335803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.335967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.335979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 
00:28:14.326 [2024-07-24 22:15:53.336194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.336208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.336443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.336456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.336753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.336766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.337007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.337021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.337183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.337196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.337349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.337362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.337578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.337591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.337854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.337867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.338029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.338042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.338349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.338362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 
00:28:14.326 [2024-07-24 22:15:53.338526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.338539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.338846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.338859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.339077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.339090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.339248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.339260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.339502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.339515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.339681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.339694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.339941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.339953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.340167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.340180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.340342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.340356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.340522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.340534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 
00:28:14.326 [2024-07-24 22:15:53.340755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.340770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.341015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.341028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.341191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.341203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.341474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.341488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.341720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.341733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.341914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.341926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.342160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.342173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.342330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.342343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.342577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.342590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.342828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.342841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 
00:28:14.326 [2024-07-24 22:15:53.343075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.343088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.343242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.343254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.343560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.343572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.343792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.343805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.343919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.343932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.344164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.344178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.344486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.344498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.344722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.344736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.344911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.344924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.345140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.345153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 
00:28:14.326 [2024-07-24 22:15:53.345386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.345400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.345552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.345565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.345799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.345811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.345987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.345999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.346168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.346182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.346343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.346355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.346538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.346551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.346770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.346783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.347021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.347034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.347189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.347202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 
00:28:14.326 [2024-07-24 22:15:53.347349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.347362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.347601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.347613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.347770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.347783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.348021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.348034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.348195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.348207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.348358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.348370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.348619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.348632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.348916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.348929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.349093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.349105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.349201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.349212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 
00:28:14.326 [2024-07-24 22:15:53.349518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.349533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.349787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.349800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.350034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.350046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.350275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.350287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.350438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.350450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.350736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.350748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.350845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.350857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.326 qpair failed and we were unable to recover it. 00:28:14.326 [2024-07-24 22:15:53.351167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.326 [2024-07-24 22:15:53.351180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.351419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.351431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.351670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.351683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 
00:28:14.327 [2024-07-24 22:15:53.351968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.351981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.352301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.352314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.352537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.352549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.352775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.352787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.352945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.352958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.353241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.353253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.353421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.353434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.353682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.353695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.353883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.353896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.354074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.354087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 
00:28:14.327 [2024-07-24 22:15:53.354312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.354324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.354513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.354525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.354834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.354847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.355063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.355076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.355176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.355188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.355490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.355502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.355727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.355740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.356005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.356018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.356255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.356267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.356449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.356461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 
00:28:14.327 [2024-07-24 22:15:53.356772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.356784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.357006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.357019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.357186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.357198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.357483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.357496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.357713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.357729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.357966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.357979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.358167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.358180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.358486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.358500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.358650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.358663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.358893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.358906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 
00:28:14.327 [2024-07-24 22:15:53.359143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.359158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.359376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.359389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.359616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.359629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.359815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.359828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.360043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.360056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.360171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.360183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.360418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.360430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.360598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.360611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.360944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.360956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.361187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.361199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 
00:28:14.327 [2024-07-24 22:15:53.361364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.361377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.361607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.361619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.361860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.361873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.362107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.362119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.362414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.362427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.362645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.362657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.362908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.362920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.363156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.363169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.363412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.363424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.363658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.363671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 
00:28:14.327 [2024-07-24 22:15:53.363964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.363977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.364232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.364244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.364528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.364540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.364757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.364770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.364868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.364879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.365105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.365117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.365402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.365415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.365674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.365709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.365931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.365950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.366127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.366144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 
00:28:14.327 [2024-07-24 22:15:53.366398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.366415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.366645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.366662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.366842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.366859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.367104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.367119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.367312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.367324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.367507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.367519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.367735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.367747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.367999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.368012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.368130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.368143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 00:28:14.327 [2024-07-24 22:15:53.368325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.327 [2024-07-24 22:15:53.368338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.327 qpair failed and we were unable to recover it. 
00:28:14.328 [2024-07-24 22:15:53.368521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.368535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.368722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.368735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.368961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.368974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.369204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.369216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.369397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.369410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.369648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.369661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.369890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.369903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.370135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.370148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.370331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.370343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.370526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.370539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 
00:28:14.328 [2024-07-24 22:15:53.370702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.370721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.371006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.371019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.371305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.371317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.371602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.371614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.371874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.371887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.372040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.372052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.372287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.372300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.372607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.372620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.372904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.372917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.373175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.373188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 
00:28:14.328 [2024-07-24 22:15:53.373418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.373430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.373600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.373612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.373944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.373957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.374117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.374129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.374293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.374306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.374483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.374495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.374729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.374742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.374754] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:28:14.328 [2024-07-24 22:15:53.374803] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.328 [2024-07-24 22:15:53.374979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.374993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.375228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.375239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 
00:28:14.328 [2024-07-24 22:15:53.375419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.375431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.375648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.375660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.375820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.375833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.376049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.376060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.376288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.376299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.376464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.376475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.376776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.376787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.377016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.377027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.377246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.377258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.377567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.377578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 
00:28:14.328 [2024-07-24 22:15:53.377809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.377823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.378065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.378077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.378315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.378326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.378557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.378569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.378854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.378866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.379153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.379164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.379449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.379461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.379680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.379691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.379959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.379971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.380207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.380220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 
00:28:14.328 [2024-07-24 22:15:53.380526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.380537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.380760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.380772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.380988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.381000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.381247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.381258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.381558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.381569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.381746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.381758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.381933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.381945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.382194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.382206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.382434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.382446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.382734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.382746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 
00:28:14.328 [2024-07-24 22:15:53.382930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.382942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.383129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.383141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.383323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.383334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.383490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.383501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.383736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.383749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.384031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.328 [2024-07-24 22:15:53.384042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.328 qpair failed and we were unable to recover it. 00:28:14.328 [2024-07-24 22:15:53.384197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.384209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.384533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.384545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.384706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.384725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.385011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.385023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 
00:28:14.329 [2024-07-24 22:15:53.385257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.385269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.385513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.385525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.385649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.385661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.385879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.385891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.386110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.386122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.386367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.386379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.386614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.386626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.386796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.386809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.387103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.387115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.387282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.387294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 
00:28:14.329 [2024-07-24 22:15:53.387550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.387564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.387781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.387793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.387960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.387973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.388281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.388293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.388579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.388591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.388751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.388764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.389000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.389013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.389237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.389249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.389534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.389547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 00:28:14.329 [2024-07-24 22:15:53.389836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.389848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it. 
00:28:14.329 [2024-07-24 22:15:53.390139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.329 [2024-07-24 22:15:53.390152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.329 qpair failed and we were unable to recover it.
00:28:14.330 EAL: No free 2048 kB hugepages reported on node 1
00:28:14.332 [identical connect() failed, errno = 111 / sock connection error messages for tqpair=0x7f2d70000b90 (addr=10.0.0.2, port=4420) repeat through 2024-07-24 22:15:53.438, each attempt ending "qpair failed and we were unable to recover it."]
00:28:14.332 [2024-07-24 22:15:53.438663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.438675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.438895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.438907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.439130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.439142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.439361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.439374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.439616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.439628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.439883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.439895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.440044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.440057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.440213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.440225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.440405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.440417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.440635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.440648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 
00:28:14.332 [2024-07-24 22:15:53.440877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.440889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.441136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.441148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.441301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.441313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.441597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.441609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.441837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.441849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.442081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.442093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.442200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.442212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.442390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.442403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.442663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.442675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.442826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.442838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 
00:28:14.332 [2024-07-24 22:15:53.443078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.443092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.443266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.443279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.443449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.443461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.443578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.443591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.443702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.443718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.443877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.443888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.444107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.444119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.444283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.444295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.444520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.444532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.444784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.444796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 
00:28:14.332 [2024-07-24 22:15:53.444961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.444974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.445117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.445129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.445443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.445456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.445561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.445574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.445747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.445760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.446044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.446057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.446207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.446219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.446472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.446484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.446648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.332 [2024-07-24 22:15:53.446661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.332 qpair failed and we were unable to recover it. 00:28:14.332 [2024-07-24 22:15:53.446902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.446914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 
00:28:14.333 [2024-07-24 22:15:53.447135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.447148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.447370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.447382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.447672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.447684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.447813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.447825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.448043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.448055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.448207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.448219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.448452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.448464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.448625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.448637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.448809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.448822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.449050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.449062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 
00:28:14.333 [2024-07-24 22:15:53.449371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.449383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.449633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.449645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.449879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.449892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.450064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.450076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.450297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.450309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.450616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.450629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.450784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.450797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.451042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.451054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.451158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.451171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.451330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.451343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 
00:28:14.333 [2024-07-24 22:15:53.451496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.451510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.451663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.451675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.451894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.451907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.452060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.452073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.452310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.452322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.452494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.452506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.452801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.452813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.453047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.453059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.453226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.453238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.453515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.453527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 
00:28:14.333 [2024-07-24 22:15:53.453700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.453712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.453894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.453907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.454125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.454138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.454431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.454444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.454604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.454616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.454828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.454841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.455084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.455096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.455249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.455262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.455568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.455581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.455731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.455743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 
00:28:14.333 [2024-07-24 22:15:53.456048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.456061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.456287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.456299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.456538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.456550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.456726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.456739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.456981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.456993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.457213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.457225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.457516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.457529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.457705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.457721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.457883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.457895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.458116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.458129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 
00:28:14.333 [2024-07-24 22:15:53.458301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.458314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.458543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.458556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.458701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.458723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.458853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.458866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.459018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.459030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.459370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.459383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.459711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.459735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.459976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.459988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.460210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.460222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.460536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.460549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 
00:28:14.333 [2024-07-24 22:15:53.460834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.460850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.461067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.461080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.461267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.461280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.461467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.461480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.461652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.461664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.461767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.461779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.461937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.461949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.333 [2024-07-24 22:15:53.462183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.333 [2024-07-24 22:15:53.462195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.333 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.462357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.462370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.462542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.462554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 
00:28:14.334 [2024-07-24 22:15:53.462791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.462804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.462985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.462998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.463161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.463173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.463397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.463410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.463623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.463636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.463802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.463815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.464054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.464066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.464326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.464338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.464574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.464586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.464823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.464836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 
00:28:14.334 [2024-07-24 22:15:53.465067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.465080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.465362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.465375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.465502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.334 [2024-07-24 22:15:53.465599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.465613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.465740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.465753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.465986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.465999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.466235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.466247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.466463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.466476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.466647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.466658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.466883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.466896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 
00:28:14.334 [2024-07-24 22:15:53.467159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.467172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.467339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.467351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.467578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.467591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.467763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.467777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.467876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.467889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.468047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.468059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.468287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.468300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.468522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.468534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.468684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.468697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.468859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.468873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 
00:28:14.334 [2024-07-24 22:15:53.469089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.469103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.469379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.469393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.469561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.469574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.469860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.469873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.470116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.470128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.470276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.470289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.470473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.470485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.470648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.470661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.470833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.470847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.471133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.471146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 
00:28:14.334 [2024-07-24 22:15:53.471444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.471457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.471701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.471717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.471867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.471879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.472066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.472079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.472315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.472330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.472636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.472649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.472802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.472815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.473038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.473051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.473220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.473232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.473397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.473409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 
00:28:14.334 [2024-07-24 22:15:53.473576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.473589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.473753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.473766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.474027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.474040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.474204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.474218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.474389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.474402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.474586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.474598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.474760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.474773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.475007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.475020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.475310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.475324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.475560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.475573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 
00:28:14.334 [2024-07-24 22:15:53.475811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.475824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.476006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.476018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.476177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.476189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.476358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.476371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.476589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.476601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.334 [2024-07-24 22:15:53.476820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.334 [2024-07-24 22:15:53.476833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.334 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.477048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.477060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.477208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.477221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.477382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.477394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.477679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.477691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 
00:28:14.335 [2024-07-24 22:15:53.477949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.477962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.478131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.478143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.478426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.478439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.478739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.478752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.478922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.478935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.479168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.479180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.479345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.479357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.479591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.479604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.479823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.479835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.480143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.480156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 
00:28:14.335 [2024-07-24 22:15:53.480320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.480332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.480500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.480512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.480678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.480691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.480929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.480942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.481161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.481175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.481467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.481479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.481647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.481660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.481945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.481958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.482050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.482062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.482230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.482243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 
00:28:14.335 [2024-07-24 22:15:53.482339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.482351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.482666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.482678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.482839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.482852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.482968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.482981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.483208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.483220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.483521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.483533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.483765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.483778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.484065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.484077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.484301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.484314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.484412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.484424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 
00:28:14.335 [2024-07-24 22:15:53.484640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.484653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.484822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.484834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.485002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.485015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.485233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.485244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.485344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.485356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.485595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.485608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.485764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.485776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.485940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.485953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.486174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.486186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.486471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.486483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 
00:28:14.335 [2024-07-24 22:15:53.486709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.486726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.487014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.487053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d6c000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.487323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.487361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.487631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.487650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.487817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.487835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.488085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.488102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.488361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.488378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.488682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.488698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.488870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.488889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d6c000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.489123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.489141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d6c000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 
00:28:14.335 [2024-07-24 22:15:53.489457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.489470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.489721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.489733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.489891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.489902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.490086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.490098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.490352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.490364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.490523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.490535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.490851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.490863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.491015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.491028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.491245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.491258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.491544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.491556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 
00:28:14.335 [2024-07-24 22:15:53.491732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.491744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.492033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.335 [2024-07-24 22:15:53.492045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.335 qpair failed and we were unable to recover it. 00:28:14.335 [2024-07-24 22:15:53.492304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.492316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.492483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.492495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.492661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.492673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.492996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.493008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.493259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.493271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.493489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.493501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.493735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.493747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.494052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.494064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 
00:28:14.336 [2024-07-24 22:15:53.494395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.494407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.494506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.494517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.494733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.494746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.494969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.494981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.495218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.495230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.495462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.495474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.495759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.495771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.496012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.496024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.496253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.496265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.496431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.496443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 
00:28:14.336 [2024-07-24 22:15:53.496727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.496740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.496910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.496924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.497237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.497249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.497507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.497520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.497610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.497622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.497780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.497793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.497939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.497952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.498098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.498110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.498213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.498224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.498562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.498575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 
00:28:14.336 [2024-07-24 22:15:53.498859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.498873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.499209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.499222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.499514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.499529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.499863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.499879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.500049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.500062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.500228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.500240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.500460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.500473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.500759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.500776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.501087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.501102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.501337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.501351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 
00:28:14.336 [2024-07-24 22:15:53.501574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.501587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.501803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.501818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.501992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.502004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.502226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.502240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.502459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.502472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.502721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.502736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.503032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.503049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.503301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.503314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.503552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.503566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.503800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.503814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 
00:28:14.336 [2024-07-24 22:15:53.503981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.503994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.504277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.504292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.504582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.504599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.504835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.504849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.505013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.505026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.505249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.505262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.505573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.505587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.505790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.505804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.505990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.506003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.506288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.506302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 
00:28:14.336 [2024-07-24 22:15:53.506468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.506481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.506656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.506674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.506914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.506927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.507147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.507161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.507337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.507350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.507700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.507719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.507976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.507989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.508157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.508170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.508335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.508347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.508568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.508580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 
00:28:14.336 [2024-07-24 22:15:53.508843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.508856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.509189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.509201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.336 qpair failed and we were unable to recover it. 00:28:14.336 [2024-07-24 22:15:53.509354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.336 [2024-07-24 22:15:53.509366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.337 qpair failed and we were unable to recover it. 00:28:14.337 [2024-07-24 22:15:53.509582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.337 [2024-07-24 22:15:53.509594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.337 qpair failed and we were unable to recover it. 00:28:14.337 [2024-07-24 22:15:53.509739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.337 [2024-07-24 22:15:53.509752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.337 qpair failed and we were unable to recover it. 00:28:14.337 [2024-07-24 22:15:53.509928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.337 [2024-07-24 22:15:53.509940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.337 qpair failed and we were unable to recover it. 00:28:14.337 [2024-07-24 22:15:53.510107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.337 [2024-07-24 22:15:53.510119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.337 qpair failed and we were unable to recover it. 00:28:14.337 [2024-07-24 22:15:53.510337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.337 [2024-07-24 22:15:53.510349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.337 qpair failed and we were unable to recover it. 00:28:14.337 [2024-07-24 22:15:53.510521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.337 [2024-07-24 22:15:53.510533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.337 qpair failed and we were unable to recover it. 00:28:14.337 [2024-07-24 22:15:53.510862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.337 [2024-07-24 22:15:53.510874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.337 qpair failed and we were unable to recover it. 
00:28:14.612 [2024-07-24 22:15:53.511160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.511172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.511421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.511435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.511667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.511680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.511833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.511846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.512014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.512025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.512260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.512273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.512581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.512593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.512825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.512838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.512996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.513008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.513159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.513171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 
00:28:14.612 [2024-07-24 22:15:53.513453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.513465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.513700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.513712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.513894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.513906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.514189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.514201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.514442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.514454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.514605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.514617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.514777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.514789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.515022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.515034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.515273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.515285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.515568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.515580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 
00:28:14.612 [2024-07-24 22:15:53.515870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.515882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.516066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.516080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.516307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.516319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.516486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.516498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.516649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.516661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.516970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.516982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.517161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.517173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.517341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.517353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.517593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.517605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.517893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.517906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 
00:28:14.612 [2024-07-24 22:15:53.518126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.518137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-07-24 22:15:53.518376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-07-24 22:15:53.518388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.518686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.518698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.518883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.518895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.519122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.519134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.519294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.519307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.519601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.519613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.519912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.519924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.520167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.520179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.520415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.520426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-07-24 22:15:53.520640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.520652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.520919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.520931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.521168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.521180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.521425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.521437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.521680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.521692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.521933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.521945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.522094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.522105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.522342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.522354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.522586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.522598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.522768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.522780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-07-24 22:15:53.522901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.522913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.523213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.523224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.523485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.523497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.523728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.523740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.523845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.523857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.524142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.524154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.524336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.524348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.524594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.524605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.524898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.524910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.525142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.525154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-07-24 22:15:53.525382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.525394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.525678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.525692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.526005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.526017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.526201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.526212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.526449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.526461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.526693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.526705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.526925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.526937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.527172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.527185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.527408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.527420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.527676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.527688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-07-24 22:15:53.527901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.527913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.528145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.528157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.528443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.528455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.528722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.528735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.529069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.529081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.529312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.529324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.529479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.529491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.529678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.529690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.529924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.529936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.530173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.530185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-07-24 22:15:53.530403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.530415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.530611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.530622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.530774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.530786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.531009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.531021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.531333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.531349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.531654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.531666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.531854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.531867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.531971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.531982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.532216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.532228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.532459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.532471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-07-24 22:15:53.532702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.532718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.532935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.532947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.533115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.533127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.533400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.533412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.533581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.533594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.533828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.533841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.534156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.534168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.534388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.534400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.534651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.534663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-07-24 22:15:53.534908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-07-24 22:15:53.534920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-07-24 22:15:53.535072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.614 [2024-07-24 22:15:53.535089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.535101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 [2024-07-24 22:15:53.535103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.535116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.614 [2024-07-24 22:15:53.535125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.614 [2024-07-24 22:15:53.535132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.614 [2024-07-24 22:15:53.535250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:14.614 [2024-07-24 22:15:53.535417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.535430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.535359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:14.614 [2024-07-24 22:15:53.535446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:28:14.614 [2024-07-24 22:15:53.535445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:14.614 [2024-07-24 22:15:53.535683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.535695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.535912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.535925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.536154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.536166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.536383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.536395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.536577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.536589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
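The app_setup_trace NOTICE lines above describe how to capture the nvmf target's trace while its reactors (started here on cores 4 through 7) are still running. Taking the commands exactly as the notice gives them, a capture step could be:

# Capture a snapshot of events at runtime, as suggested by the NOTICE lines:
spdk_trace -s nvmf -i 0
# 'spdk_trace' without parameters also works if this is the only SPDK
# application currently running; alternatively keep the raw shared-memory
# trace file for offline analysis/debug (destination path is arbitrary):
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0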
00:28:14.614 [2024-07-24 22:15:53.536895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.536908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.537080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.537092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.537259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.537270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.537489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.537501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.537799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.537815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.538066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.538078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.538337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.538349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.538565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.538577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.538886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.538898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.539154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.539167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-07-24 22:15:53.539409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.539421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.539729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.539742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.539996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.540009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.540316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.540329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.540506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.540518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.540823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.540836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.541146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.541158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.541444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.541456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.541747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.541759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.541991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.542003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-07-24 22:15:53.542225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.542238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.542529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.542541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.542725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.542737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.542972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.542984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.543220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.543233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.543452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.543464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.543756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.543769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.543990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.544002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.544286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.544299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.544534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.544546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-07-24 22:15:53.544766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.544779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.545012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.545025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.545308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.545321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.545627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.545640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.545788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.545801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.546053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.546065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.546229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.546242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.546525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.546538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.546635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.546647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.546879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.546892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-07-24 22:15:53.547116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.547129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.547369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.547382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.547603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.547616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.547844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.547858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.548073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.548087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.548392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.548406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.548573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.548585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.548760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.548773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.548876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.548889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-07-24 22:15:53.549113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-07-24 22:15:53.549126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-07-24 22:15:53.549361 .. 2024-07-24 22:15:53.597700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 -- the same error pair and "qpair failed and we were unable to recover it." repeat for every connection attempt in this interval.
00:28:14.617 [2024-07-24 22:15:53.597961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.617 [2024-07-24 22:15:53.597973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.617 qpair failed and we were unable to recover it. 00:28:14.617 [2024-07-24 22:15:53.598142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.617 [2024-07-24 22:15:53.598154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.617 qpair failed and we were unable to recover it. 00:28:14.617 [2024-07-24 22:15:53.598371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.617 [2024-07-24 22:15:53.598383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.617 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.598565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.598577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.598733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.598745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.598995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.599007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.599157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.599169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.599386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.599398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.599730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.599742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.600052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.600063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-07-24 22:15:53.600382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.600394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.600697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.600709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.600932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.600944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.601162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.601174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.601389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.601400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.601689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.601701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.601855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.601867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.602103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.602114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.602425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.602437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.602693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.602705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-07-24 22:15:53.602809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.602821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.602988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.603000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.603287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.603299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.603449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.603461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.603570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.603582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.603915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.603927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.604234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.604247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.604346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.604358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.604584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.604596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.604849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.604862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-07-24 22:15:53.605046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.605058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.605365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.605379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.605633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.605645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.605758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.605770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.606075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.606087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.606266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.606278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.606588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.606600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.606928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.606941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.607126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.607138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.607475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.607487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-07-24 22:15:53.607736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.607748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.608049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.608061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.608299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.608311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.608460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.608472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.608717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.608729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.609042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.609054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.609225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.609236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.609523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.609534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.609763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.609776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.610077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.610089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-07-24 22:15:53.610327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.610339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.610505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.610517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.610779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.610791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.611113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.611125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.611380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.611392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.611650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.611662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.611839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.611852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.612068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.612080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.612317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.612330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.612512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.612524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-07-24 22:15:53.612766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.612778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.613009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.613021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.613305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.613317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.613485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.613497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.613794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.613807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.613982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.613994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.614252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.614264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.614485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.614497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.614681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.614693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.614928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.614941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-07-24 22:15:53.615228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.615240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.615549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.615563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.615873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-07-24 22:15:53.615886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-07-24 22:15:53.616047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.616059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.616388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.616400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.616634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.616646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.616866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.616878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.617026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.617038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.617256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.617268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.617489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.617501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 
00:28:14.619 [2024-07-24 22:15:53.617730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.617742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.617926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.617939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.618225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.618237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.618545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.618557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.618785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.618797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.619106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.619119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.619347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.619359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.619533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.619545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.619833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.619845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.620099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.620110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 
00:28:14.619 [2024-07-24 22:15:53.620348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.620360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.620659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.620671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.620905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.620917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.621154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.621167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.621470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.621482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.621711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.621726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.621944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.621956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.622175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.622186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.622344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.622356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.622588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.622600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 
00:28:14.619 [2024-07-24 22:15:53.622842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.622854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.623091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.623103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.623391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.623403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.623507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.623519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.623811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.623823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.624116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.624129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.624413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.624424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.624653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.624665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.624884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.624896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.625185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.625197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 
00:28:14.619 [2024-07-24 22:15:53.625491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.625502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.625724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.625738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.625958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.625970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.626278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.626290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.626478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.626489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.626639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.626651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.626884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.626896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.627173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.627185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.627352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.627364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.627534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.627546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 
00:28:14.619 [2024-07-24 22:15:53.627831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.627843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.628060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.628072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.628309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.628321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.628633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.628645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.628875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.628887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.629180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.629191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.629302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.629314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.629562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.629574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.629743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.629756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.629932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.629944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 
00:28:14.619 [2024-07-24 22:15:53.630183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.630195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.630429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.630441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.630777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.630789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.630958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.630970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.631276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.631288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.631574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.631586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.631811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.631823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.632106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-07-24 22:15:53.632118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-07-24 22:15:53.632427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.632440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.632613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.632625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-07-24 22:15:53.632843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.632855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.633157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.633169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.633326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.633338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.633551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.633563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.633729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.633742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.633912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.633924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.634207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.634219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.634503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.634515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.634823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.634835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.634981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.634994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-07-24 22:15:53.635253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.635265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.635581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.635595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.635908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.635921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.636157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.636169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.636387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.636399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.636630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.636642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.636753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.636765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.637065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.637078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.637384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.637396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.637628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.637640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-07-24 22:15:53.637937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.637949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.638162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.638174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.638405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.638417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.638513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.638525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.638690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.638703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.638855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.638868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.639120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.639131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.639299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.639311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.639619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.639631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.639865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.639877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-07-24 22:15:53.640163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.640175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.640425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.640438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.640734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.640746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.640910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.640922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.641149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.641161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.641419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.641430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.641712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.641727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.641959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.641971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.642260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.642272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.642532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.642544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-07-24 22:15:53.642766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.642779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.642960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.642972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.643205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.643217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.643475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.643487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.643654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.643666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.643971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.643983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.644220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.644232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.644543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.644555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.644807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.644819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.645060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.645072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-07-24 22:15:53.645287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.645299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.645462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.645476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.645787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.645799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.646032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.646045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.646300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.646312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.646622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.646634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.646866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.646879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.647161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.647173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.647466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-07-24 22:15:53.647478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-07-24 22:15:53.647708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.647724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
00:28:14.621 [2024-07-24 22:15:53.647942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.647954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.648264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.648276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.648506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.648517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.648618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.648629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.648935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.648948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.649294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.649306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.649553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.649565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.649656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.649668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.649911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.649924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.650293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.650305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
00:28:14.621 [2024-07-24 22:15:53.650591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.650603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.650936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.650948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.651176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.651188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.651500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.651512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.651817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.651829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.652079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.652090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.652316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.652328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.652584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.652596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.652809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.652849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.653199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.653217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
00:28:14.621 [2024-07-24 22:15:53.653513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.653530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.653833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.653852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.654150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.654167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.654355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.654372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.654531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.654545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.654877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.654889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.655046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.655058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.655309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.655321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.655490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.655502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.655756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.655768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
00:28:14.621 [2024-07-24 22:15:53.656006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.656018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.656327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.656339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.656560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.656571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.656761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.656773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.657006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.657018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.657188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.657201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.657383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.657395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.657560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.657572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.657810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.657823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.657998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.658010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
00:28:14.621 [2024-07-24 22:15:53.658311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.658323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.658595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.658607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.658841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.658853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.659020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.659032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.659252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.659264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.659432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.659444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.659556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.659568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.659799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.659811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.660028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.660040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.660376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.660388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
00:28:14.621 [2024-07-24 22:15:53.660623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.660634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.660872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.660884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.661193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.661205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.661441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.661453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.661729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.661742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.662027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.662040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.662269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.662281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.662562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.662574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.662823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.662837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.663023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.663035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
00:28:14.621 [2024-07-24 22:15:53.663364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.663376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.663664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.663676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.663838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.663850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.664135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.664147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.664458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.664470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.664685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.664697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.664928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-07-24 22:15:53.664940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-07-24 22:15:53.665169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.665181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.665396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.665408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.665692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.665704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 
00:28:14.622 [2024-07-24 22:15:53.665973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.665985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.666199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.666211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.666447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.666459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.666704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.666727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.667023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.667035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.667215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.667227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.667456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.667468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.667777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.667789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.668135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.668147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.668456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.668468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 
00:28:14.622 [2024-07-24 22:15:53.668694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.668707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.668946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.668958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.669200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.669212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.669429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.669441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.669697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.669709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.669939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.669951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.670238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.670249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.670557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.670568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.670820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.670832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.671163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.671175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 
00:28:14.622 [2024-07-24 22:15:53.671335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.671347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.671643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.671655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.671894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.671906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.672071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.672083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.672237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.672250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.672500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.672512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.672818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.672831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.673125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.673137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.673317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.673331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.673590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.673602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 
00:28:14.622 [2024-07-24 22:15:53.673911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.673923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.674091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.674103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.674316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.674328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.674564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.674577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.674671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.674683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.674915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.674928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.675202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.675214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.675450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.675462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.675701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.675713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.675967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.675979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 
00:28:14.622 [2024-07-24 22:15:53.676233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.676245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.676358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.676370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.676543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.676555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.676791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.676804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.676955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.676967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.677160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.677172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.677274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.677286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.677532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.677544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.677790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.677802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.678020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.678032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 
00:28:14.622 [2024-07-24 22:15:53.678197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.678209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.678428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.678440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.678671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.678683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.679010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.679022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.679321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.679333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.679551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.679564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.679863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.679875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.680093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.680105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.680365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.680377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-07-24 22:15:53.680463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.680474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 
00:28:14.622 [2024-07-24 22:15:53.680774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-07-24 22:15:53.680786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.680983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.680995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.681162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.681174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.681434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.681446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.681607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.681618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.681880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.681892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.682066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.682078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.682235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.682247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.682474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.682488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.682636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.682648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-07-24 22:15:53.682812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.682825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.683044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.683056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.683280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.683292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.683534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.683545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.683859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.683871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.684188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.684200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.684436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.684448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.684757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.684769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.684934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.684946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.685229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.685241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-07-24 22:15:53.685488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.685500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.685725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.685738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.686027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.686040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.686201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.686212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.686456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.686468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.686684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.686696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.686918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.686930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.687168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.687179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.687493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.687504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.687806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.687818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-07-24 22:15:53.688127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.688139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.688386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.688398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.688718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.688730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.688949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.688961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.689191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.689203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.689431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.689444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.689744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.689757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.690041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.690053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.690285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.690297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.690457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.690469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-07-24 22:15:53.690637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.690649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.690799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.690811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.691097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.691109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.691348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.691359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.691668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.691680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.691854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.691866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.692085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.692097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.692350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.692362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.692591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.692604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.692781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.692793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-07-24 22:15:53.693030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.693042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.693259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.693271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.693510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.693522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.693753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.693765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.694033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.694045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.694220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.694232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.694483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.694495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.694657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.694668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.694974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.694986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.695247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.695259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-07-24 22:15:53.695477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.695489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.695802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.695814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.696049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.696062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.696373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.696385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.696696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.696708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.696889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.696901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-07-24 22:15:53.697188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-07-24 22:15:53.697200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.697510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.697522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.697680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.697692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.697845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.697857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-07-24 22:15:53.698111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.698123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.698373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.698385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.698679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.698692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.698988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.699000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.699234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.699246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.699411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.699424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.699574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.699586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.699819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.699832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.700086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.700098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.700366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.700378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-07-24 22:15:53.700626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.700638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.700867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.700880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.701097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.701109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.701197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.701210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.701445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.701457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.701750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.701762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.701989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.702001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.702233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.702244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.702459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.702474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.702704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.702721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-07-24 22:15:53.702974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.702986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.703221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.703233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.703520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.703532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.703779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.703792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.703958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.703970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.704122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.704134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.704396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.704408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.704574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.704586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.704831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.704843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.705103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.705115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-07-24 22:15:53.705284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.705296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.705461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.705473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.705787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.705799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.706043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.706055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.706278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.706290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.706473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.706485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.706796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.706808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.707043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.707055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.707351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.707363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.707658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.707670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-07-24 22:15:53.707830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.707842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.708097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.708109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.708409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.708421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.708611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.708623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.708842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.708855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.709162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.709174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.709442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.709454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.709628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.709641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.709876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.709888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.710047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.710060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-07-24 22:15:53.710345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.710357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.710593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.710605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.710763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.710776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.710943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.710955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.711112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.711124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.711344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.711356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.711645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.711657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.711901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.711913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.712095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.712109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.712271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.712283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-07-24 22:15:53.712542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.712553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.712840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.712853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.713080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.713092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.713341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.713353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.713582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-07-24 22:15:53.713594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-07-24 22:15:53.713826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.713838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.714055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.714067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.714234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.714246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.714477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.714489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.714788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.714801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 
00:28:14.625 [2024-07-24 22:15:53.715110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.715122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.715406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.715418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.715732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.715745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.715983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.715995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.716245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.716257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.716473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.716485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.716707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.716724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.717041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.717053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.717232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.717244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.717462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.717474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 
00:28:14.625 [2024-07-24 22:15:53.717704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.717719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.718030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.718042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.718203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.718214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.718389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.718401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.718710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.718726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.718879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.718891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.719130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.719142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.719394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.719406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.719646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.719658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.719990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.720002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 
00:28:14.625 [2024-07-24 22:15:53.720247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.720259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.720494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.720506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.720817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.720828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.721014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.721025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.721194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.721206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.721511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.721523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.721751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.721763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.722072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.722084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.722414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.722428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.722669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.722681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 
00:28:14.625 [2024-07-24 22:15:53.722989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.723001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.723222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.723234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.723542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.723553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.723838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.723851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.724203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.724215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.724503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.724515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.724751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.724763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.724982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.724994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.725157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.725168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.725271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.725283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 
00:28:14.625 [2024-07-24 22:15:53.725590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.725602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.725838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.725850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.726086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.726099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.726258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.726270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.726500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.726512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.726822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.726835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.727081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.727093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.727257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.727269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.727554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.727567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.727784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.727796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 
00:28:14.625 [2024-07-24 22:15:53.728079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.728091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.728332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.728345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.728507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.728519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.728781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.728793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.729026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.729038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.729256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.729268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.625 [2024-07-24 22:15:53.729575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.625 [2024-07-24 22:15:53.729587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.625 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.729836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.729848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.730075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.730087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.730375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.730387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 
00:28:14.626 [2024-07-24 22:15:53.730631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.730642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.730938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.730951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.731169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.731181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.731415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.731427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.731642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.731654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.731941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.731953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.732212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.732224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.732460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.732472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.732639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.732653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.732838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.732850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 
00:28:14.626 [2024-07-24 22:15:53.733134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.733146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.733386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.733398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.733641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.733653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.733873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.733885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.734060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.734072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.734246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.734258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.734499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.734511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.734731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.734743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.734999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.735011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.735191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.735204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 
00:28:14.626 [2024-07-24 22:15:53.735376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.735388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.735559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.735571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.735817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.735829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.736058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.736071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.736222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.736234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.736450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.736462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.736785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.736797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.737011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.737023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.737283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.737295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.737522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.737534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 
00:28:14.626 [2024-07-24 22:15:53.737699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.737712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.737955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.737967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.738250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.738262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.738548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.738560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.738736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.738748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.739013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.739025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.739332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.739344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.739582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.739594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.739762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.739775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.740081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.740093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 
00:28:14.626 [2024-07-24 22:15:53.740355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.740367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.740552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.740563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.740807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.740819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.741057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.741069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.741240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.741252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.741558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.741570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.741881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.741894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.742108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.742121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.742356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.742370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.742697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.742709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 
00:28:14.626 [2024-07-24 22:15:53.743042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.743054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.743357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.743369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.743618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.743630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.743924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.743936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.744224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.744237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.744563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.744575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.744882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.744894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.745086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.745098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.745382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.745394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.745678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.745690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 
00:28:14.626 [2024-07-24 22:15:53.745791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.745803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.746053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.746065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.746250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.746262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.626 [2024-07-24 22:15:53.746569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.626 [2024-07-24 22:15:53.746582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.626 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.746887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.746900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.747185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.747197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.747483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.747495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.747712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.747728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.747894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.747906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.748124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.748136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 
00:28:14.627 [2024-07-24 22:15:53.748279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.748291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.748574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.748585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.748898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.748910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.749079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.749091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.749323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.749335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.749488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.749500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.749798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.749810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.750042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.750054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.750368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.750381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.750666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.750678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 
00:28:14.627 [2024-07-24 22:15:53.751007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.751019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.751201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.751213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.751474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.751485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.751722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.751735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.752019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.752031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.752216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.752228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.752527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.752539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.752838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.752851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.753178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.753193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.753278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.753290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 
00:28:14.627 [2024-07-24 22:15:53.753450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.753462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.753688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.753700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.753991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.754003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.754153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.754165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.754401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.754413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.754632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.754644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.755003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.755015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.755328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.755339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.755559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.755571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.755801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.755813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 
00:28:14.627 [2024-07-24 22:15:53.756103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.756115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.756426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.756438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.756730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.756742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.756894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.756906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.757154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.757166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.757396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.757408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.757625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.757637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.757922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.757934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.758093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.758105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.758412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.758424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 
00:28:14.627 [2024-07-24 22:15:53.758653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.758665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.758819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.758832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.759075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.759087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.759325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.759336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.759578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.759590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.759839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.759851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.760084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.760096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.760413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.760425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.760731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.760743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 00:28:14.627 [2024-07-24 22:15:53.761054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.627 [2024-07-24 22:15:53.761066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.627 qpair failed and we were unable to recover it. 
00:28:14.627 [2024-07-24 22:15:53.761307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.627 [2024-07-24 22:15:53.761320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.627 qpair failed and we were unable to recover it.
[... the same pair of errors — posix_sock_create: connect() failed, errno = 111 (connection refused) and nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 — each followed by "qpair failed and we were unable to recover it.", repeats continuously from 22:15:53.761 through 22:15:53.813 ...]
00:28:14.908 [2024-07-24 22:15:53.813255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.908 [2024-07-24 22:15:53.813267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420
00:28:14.908 qpair failed and we were unable to recover it.
00:28:14.909 [2024-07-24 22:15:53.813437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.813449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.813699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.813711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.813960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.813972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.814219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.814231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.814476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.814488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.814662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.814674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.814905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.814917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.815201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.815213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.815398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.815410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.815576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.815588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 
00:28:14.909 [2024-07-24 22:15:53.815843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.815856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.816070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.816082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.816301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.816313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.816597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.816609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.816896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.816908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.817201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.817213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.817378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.817390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.817607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.817619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.817778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.817790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.818030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.818043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 
00:28:14.909 [2024-07-24 22:15:53.818351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.818363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.818582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.818594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.818835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.818847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.818996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.819007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.819297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.819310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.819637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.819651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.819812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.819824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.820135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.820147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.820295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.820307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.820473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.820485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 
00:28:14.909 [2024-07-24 22:15:53.820809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.820822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.820926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.820938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.821100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.821112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.821329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.821341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.821558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.821569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.821777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.821789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.909 [2024-07-24 22:15:53.822075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.909 [2024-07-24 22:15:53.822087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.909 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.822236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.822248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.822476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.822488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.822668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.822681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-07-24 22:15:53.823004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.823016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.823176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.823188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.823447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.823459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.823768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.823780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.823886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.823898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.824129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.824141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.824298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.824310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.824587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.824599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.824812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.824824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.825045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.825057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-07-24 22:15:53.825316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.825328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.825579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.825592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.825898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.825910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.826131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.826143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.826371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.826383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.826529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.826541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.826849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.826861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.827096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.827108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.827343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.827355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.827637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.827649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-07-24 22:15:53.827805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.827817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.827981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.827993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.828301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.828312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.828549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.828561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.828778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.828790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.829012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.829025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.829283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.829295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.829581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.829593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.829826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.829838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.830134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.830146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-07-24 22:15:53.830380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.830392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.830676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-07-24 22:15:53.830688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-07-24 22:15:53.830993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.831005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.831226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.831238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.831454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.831466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.831692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.831704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.832027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.832039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.832274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.832286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.832578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.832590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.832706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.832722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-07-24 22:15:53.832883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.832894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.833201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.833213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.833538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.833550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.833885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.833897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.834132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.834144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.834380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.834392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.834697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.834708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.835021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.835033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.835251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.835263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.835590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.835602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-07-24 22:15:53.835890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.835902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.836067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.836080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.836396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.836408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.836648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.836660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.836966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.836978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.837195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.837207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.837451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.837463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.837698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.837710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.838048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.838060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.838279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.838290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-07-24 22:15:53.838522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.838534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.838821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.838833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.839078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.839090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.839395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.839407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.839664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.839676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.839921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.839935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.840164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.840176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-07-24 22:15:53.840460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-07-24 22:15:53.840472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.840702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.840718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.840955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.840967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 
00:28:14.912 [2024-07-24 22:15:53.841180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.841191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.841476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.841488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.841774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.841786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.842037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.842049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.842358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.842370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.842700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.842712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.843005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.843017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.843260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.843272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.843485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.843497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.843676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.843687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 
00:28:14.912 [2024-07-24 22:15:53.843929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.843942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.844229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.844241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.844539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.844551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.844792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.844805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.845088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.845100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.845341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.845353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.845662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.845674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.845962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.845974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.846137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.846149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.846435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.846447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 
00:28:14.912 [2024-07-24 22:15:53.846533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.846545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.846876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.846888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.847225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.847237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.847530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.847542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.847704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.847725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.848026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.848038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.848340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.848352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.848574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.848586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.848876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.848888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.849192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.849203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 
00:28:14.912 [2024-07-24 22:15:53.849514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.849525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.849744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.849756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.849984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.849996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.850172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.850184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.850360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-07-24 22:15:53.850372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-07-24 22:15:53.850685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.850699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.850998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.851010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.851319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.851332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.851577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.851589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.851804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.851816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 
00:28:14.913 [2024-07-24 22:15:53.852074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.852086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.852310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.852322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.852652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.852664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.852951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.852963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.853258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.853270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.853576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.853588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.853827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.853840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.854093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.854105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.854413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.854425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.854647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.854660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 
00:28:14.913 [2024-07-24 22:15:53.854897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.854909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.855141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.855153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.855326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.855338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.855642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.855653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.855823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.855835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.855997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.856009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.856320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.856331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.856630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.856642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.856893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.856905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.857238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.857250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 
00:28:14.913 [2024-07-24 22:15:53.857469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.857480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.857840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.857853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.858114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.858126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.858390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.858401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.858506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.858518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.858775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.858788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.913 qpair failed and we were unable to recover it. 00:28:14.913 [2024-07-24 22:15:53.859026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.913 [2024-07-24 22:15:53.859038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.859208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.859220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.859507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.859519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.859737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.859749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 
00:28:14.914 [2024-07-24 22:15:53.860040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.860052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.860215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.860226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.860459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.860471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.860699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.860711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.860899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.860911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.861165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.861179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.861464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.861476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.861718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.861730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.861884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.861896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.862150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.862163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 
00:28:14.914 [2024-07-24 22:15:53.862311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.862322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.862536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.862548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.862856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.862868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.863110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.863122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.863277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.863289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.863542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.863554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.863770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.863782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.864017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.864029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.864264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.864275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.864525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.864537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 
00:28:14.914 [2024-07-24 22:15:53.864765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.864777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.865039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.865050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.865334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.865346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.865572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.865584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.865837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.865850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.866122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.866134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.866386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.866398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.866632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.866644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.866867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.866880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.867100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.867112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 
00:28:14.914 [2024-07-24 22:15:53.867329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.867341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.867439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.867451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.867609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.867620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.914 [2024-07-24 22:15:53.867858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.914 [2024-07-24 22:15:53.867870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.914 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.868019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.868031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.868337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.868349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.868514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.868526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.868775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.868787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.869005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.869017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.869273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.869285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-07-24 22:15:53.869568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.869579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.869864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.869876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.870189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.870201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.870363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.870375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.870605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.870617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.870780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.870794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.871017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.871029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.871311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.871323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.871487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.871498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.871678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.871690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-07-24 22:15:53.872008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.872020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.872264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.872276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.872494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.872506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.872722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.872734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.873034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.873046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.873195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.873207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.873447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.873459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.873764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.873776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.874081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.874093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.874325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.874337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-07-24 22:15:53.874573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.874585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.874868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.874881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.875112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.875124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.875344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.875356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.875672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.875683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.875923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.875935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.876245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.876257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.876599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.876611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.876787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.876799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.877083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.877095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-07-24 22:15:53.877262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.877274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-07-24 22:15:53.877429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-07-24 22:15:53.877440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.877676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.877688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.877878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.877891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.878153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.878165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.878404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.878416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.878579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.878591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.878762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.878774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.879065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.879077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.879309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.879321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-07-24 22:15:53.879558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.879570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.879683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.879695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.879933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.879945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.880205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.880217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.880375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.880387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.880604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.880618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.880904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.880916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.881161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.881173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.881426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.881437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.881684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.881696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-07-24 22:15:53.881984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.881996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.882212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.882225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.882523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.882535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.882844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.882856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.883032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.883044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.883273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.883284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.883514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.883526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.883755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.883767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.884025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.884036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.884329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.884341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-07-24 22:15:53.884607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.884619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.884796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.884808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.885117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.885129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.885425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.885437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.885721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.885733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.885928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.885941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.886169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.886181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.886345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-07-24 22:15:53.886357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-07-24 22:15:53.886584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.886596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.886812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.886824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-07-24 22:15:53.887059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.887070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.887310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.887322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.887557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.887569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.887748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.887760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.887938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.887951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.888139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.888150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.888365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.888376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.888689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.888701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.888988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.888999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.889239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.889251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-07-24 22:15:53.889543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.889555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.889771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.889783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.889930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.889942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.890178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.890190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.890494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.890506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.890731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.890745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.890966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.890978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.891139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.891151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.891458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.891470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.891617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.891629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-07-24 22:15:53.891811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.891823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.891981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.891993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.892152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.892164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.892380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.892392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.892552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.892565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.892850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.892862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.893196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.893207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.893521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.893533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.893769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.893781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-07-24 22:15:53.894077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-07-24 22:15:53.894089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-07-24 22:15:53.894372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.894384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.894692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.894704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.894954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.894966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.895147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.895159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.895392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.895404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.895555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.895567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.895867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.895879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.896044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.896056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.896274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.896286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.896519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.896531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-07-24 22:15:53.896846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.896858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.897083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.897095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.897327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.897339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.897643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.897655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.897889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.897901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.898184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.898196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.898435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.898447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.898666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.898678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.899005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.899017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.899259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.899271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-07-24 22:15:53.899577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.899589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.899895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.899908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.900167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.900179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.900410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.900422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.900666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.900678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.900916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.900929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.901092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.901104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.901393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.901405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.901570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.901581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.901842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.901854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-07-24 22:15:53.902088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.902100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.902344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.902356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.902611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.902623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.902785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.902797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.903086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.903098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.903266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.903278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-07-24 22:15:53.903430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-07-24 22:15:53.903442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.903670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.903682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.903911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.903924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.904076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.904088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 
00:28:14.919 [2024-07-24 22:15:53.904310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.904322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.904539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.904551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.904710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.904725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.905011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.905023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.905257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.905269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.905552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.905564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.905795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.905808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.906115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.906126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.906354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.906366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.906547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.906559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 
00:28:14.919 [2024-07-24 22:15:53.906804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.906816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.907037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.907049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.907201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.907215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.907378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.907390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.907613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.907625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.907851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.907864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.908105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.908117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.908353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.908365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.908607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.908619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.908875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.908887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 
00:28:14.919 [2024-07-24 22:15:53.909210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.909222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.909412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.909424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.909733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.909746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.910054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.910067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.910221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.910233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.910495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.910507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.910754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.910767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.911020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.911032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.911340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.911352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.911585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.911597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 
00:28:14.919 [2024-07-24 22:15:53.911883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.911895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.912122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.912134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.912441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.912452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.912670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-07-24 22:15:53.912682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-07-24 22:15:53.912908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.912921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.913228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.913240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.913547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.913559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.913747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.913759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.913932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.913944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.914199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.914211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 
00:28:14.920 [2024-07-24 22:15:53.914520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.914532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.914772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.914784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.915092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.915104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.915332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.915344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.915513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.915525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.915851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.915864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.916084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.916096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.916281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.916293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.916463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.916475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.916701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.916713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 
00:28:14.920 [2024-07-24 22:15:53.917041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.917053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.917227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.917240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.917474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.917488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.917741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.917753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.917916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.917929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.918150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.918161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.918321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.918333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.918497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.918508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.918670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.918682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.918972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.918984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 
00:28:14.920 [2024-07-24 22:15:53.919297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.919309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.919540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.919552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.919837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.919849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.920090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.920102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.920404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.920416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.920584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.920596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.920840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.920852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.921101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.921112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.921446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.921458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.921650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.921662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 
00:28:14.920 [2024-07-24 22:15:53.921757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.921769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.920 [2024-07-24 22:15:53.921931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.920 [2024-07-24 22:15:53.921943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.920 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.922194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.922206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.922491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.922503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.922658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.922670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.922902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.922915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.923148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.923160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.923396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.923408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.923652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.923664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.923920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.923932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 
00:28:14.921 [2024-07-24 22:15:53.924216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.924228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.924480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.924492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.924751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.924763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.924929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.924941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.925158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.925170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.925349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.925361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.925575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.925587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.925824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.925837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.926074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.926086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.926431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.926443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 
00:28:14.921 [2024-07-24 22:15:53.926665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.926677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.926905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.926917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.927223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.927237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.927418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.927430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.927595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.927607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.927893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.927905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.928144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.928156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.928378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.928390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.928540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.928552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.928705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.928721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 
00:28:14.921 [2024-07-24 22:15:53.929030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.929042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.929189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.929201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.929510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.929522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.929758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.929770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.929924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.929936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.930174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.930186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.930420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.930432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.930649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.930661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.930843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.930855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.921 qpair failed and we were unable to recover it. 00:28:14.921 [2024-07-24 22:15:53.931145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.921 [2024-07-24 22:15:53.931157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 
00:28:14.922 [2024-07-24 22:15:53.931351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.931363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.931522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.931534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.931817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.931830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.931982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.931994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.932305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.932317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.932506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.932518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.932811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.932823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.933044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.933056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.933274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.933286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.933569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.933581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 
00:28:14.922 [2024-07-24 22:15:53.933889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.933901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.934185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.934197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.934419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.934430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.934786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.934798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.935039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.935051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.935289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.935301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.935534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.935545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.935694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.935706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.936012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.936024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.936247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.936260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 
00:28:14.922 [2024-07-24 22:15:53.936428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.936440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.936656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.936668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.936846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.936861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.937088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.937100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.937263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.937275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.937499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.937511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.937735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.937747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.937971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.937983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.938290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.938302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.938459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.938471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 
00:28:14.922 [2024-07-24 22:15:53.938635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.938647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.938872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-07-24 22:15:53.938884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-07-24 22:15:53.939114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.939126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.939341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.939353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.939636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.939648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.939831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.939843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.939933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.939945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.940253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.940265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.940440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.940452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.940696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.940708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-07-24 22:15:53.941034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.941046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.941356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.941368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.941582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.941594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.941762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.941775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.941995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.942007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.942278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.942290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.942525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.942537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.942687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.942700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.942876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.942888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.943178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.943190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-07-24 22:15:53.943426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.943437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.943544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.943555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.943710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.943727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.944012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.944024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.944244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.944255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.944567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.944579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.944805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.944817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.944981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.944993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.945226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.945238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.945470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.945482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-07-24 22:15:53.945710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.945727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.945953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.945966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.946213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.946227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.946474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.946486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.946725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.946737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.947043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.947055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.947339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.947351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.947584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.947597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.947780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-07-24 22:15:53.947792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-07-24 22:15:53.947941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.947953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 
00:28:14.924 [2024-07-24 22:15:53.948185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.948197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.948447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.948460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.948766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.948778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.949110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.949122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.949354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.949366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.949673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.949686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.949917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.949929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.950163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.950176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.950401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.950413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.950704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.950723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 
00:28:14.924 [2024-07-24 22:15:53.951007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.951019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.951153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.951165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.951395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.951408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.951575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.951588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.951762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.951775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.952079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.952091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.952207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.952220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.952449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.952462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.952796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.952810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.953045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.953058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 
00:28:14.924 [2024-07-24 22:15:53.953287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.953299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.953466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.953479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.953637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.953649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.953879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.953891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.954127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.954140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.954450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.954462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.954684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.954696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.954962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.954975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.955197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.955210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.955382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.955394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 
00:28:14.924 [2024-07-24 22:15:53.955632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.955644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.955810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.955822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.956122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.956136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.956466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-07-24 22:15:53.956478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-07-24 22:15:53.956654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.956667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.956951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.956964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.957257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.957269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.957431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.957443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.957708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.957724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.957961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.957973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 
00:28:14.925 [2024-07-24 22:15:53.958286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.958298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.958520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.958532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.958713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.958730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.958974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.958986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.959236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.959248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.959501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.959513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.959756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.959768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.959956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.959968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.960145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.960157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.960386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.960398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 
00:28:14.925 [2024-07-24 22:15:53.960645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.960657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.960853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.960865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.961030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.961042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.961263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.961275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.961590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.961602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.961826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.961841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.961997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.962009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.962273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.962285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.962452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.962466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.962622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.962635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 
00:28:14.925 [2024-07-24 22:15:53.962822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.962835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.963071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.963084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.963317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.963329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.963455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.963467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.963637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.963648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.963960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.963973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.964136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.964148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.964428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.964440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.964597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.964609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.964900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.964913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 
00:28:14.925 [2024-07-24 22:15:53.965072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.965085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.925 [2024-07-24 22:15:53.965249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.925 [2024-07-24 22:15:53.965261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.925 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.965405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.965419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.965646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.965658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.965844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.965856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.966001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.966013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.966176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.966188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.966353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.966365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.966506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.966518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.966757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.966769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 
00:28:14.926 [2024-07-24 22:15:53.966958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.966971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.967204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.967216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.967512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.967524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.967754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.967767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.967919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.967931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.968095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.968107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.968459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.968471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.968652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.968664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.968899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.968911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.969140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.969153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 
00:28:14.926 [2024-07-24 22:15:53.969330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.969342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.969548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.969560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.969739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.969752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.969910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.969922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.970217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.970230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.970474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.970487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.970653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.970665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.970963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.970976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.971152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.971164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.971386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.971398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 
00:28:14.926 [2024-07-24 22:15:53.971688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.971701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.971822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.971834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.971995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.972008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.972232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.972244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.972477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.972489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.972711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.972729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.972962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.972974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.973210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.973222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.973508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.973520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.926 qpair failed and we were unable to recover it. 00:28:14.926 [2024-07-24 22:15:53.973697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.926 [2024-07-24 22:15:53.973709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 
00:28:14.927 [2024-07-24 22:15:53.973935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.973947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.974185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.974198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.974484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.974498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.974723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.974735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.974964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.974976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.975214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.975226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.975330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.975342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.975560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.975573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.975799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.975812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.975962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.975974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 
00:28:14.927 [2024-07-24 22:15:53.976144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.976156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.976370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.976382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.976552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.976564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.976823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.976835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.977002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.977014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.977254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.977267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.977451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.977463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.977692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.977704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.977966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.977979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.978219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.978231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 
00:28:14.927 [2024-07-24 22:15:53.978448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.978460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.978637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.978649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.978798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.978810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.979050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.979062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.979377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.979389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.979574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.979586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.979836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.979849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.979983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-07-24 22:15:53.979995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-07-24 22:15:53.980309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.980321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.980546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.980558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-07-24 22:15:53.980775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.980788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.980947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.980960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.981198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.981210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.981370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.981382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.981672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.981684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.981901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.981914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.982200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.982212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.982394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.982406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.982690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.982702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.982931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.982943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-07-24 22:15:53.983123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.983135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.983356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.983368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.983598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.983611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.983850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.983862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.984145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.984157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.984374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.984386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.984606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.984618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.984769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.984782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.984952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.984964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.985253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.985265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-07-24 22:15:53.985494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.985505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.985676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.985688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.985888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.985902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.986040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.986053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.986170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.986182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.986269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.986281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.986508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.986519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.986676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.986688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.986985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.986998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.987286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.987298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-07-24 22:15:53.987492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.987504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.987734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.987746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.987992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.988004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.988221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.988233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.988457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.988470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-07-24 22:15:53.988682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-07-24 22:15:53.988694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.988918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.988930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.989147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.989160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.989377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.989389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.989682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.989694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 
00:28:14.929 [2024-07-24 22:15:53.989928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.989941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.990115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.990127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.990297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.990309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.990484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.990496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.990718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.990730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.990951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.990964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.991202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.991215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.991444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.991457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.991673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.991685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.991920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.991933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 
00:28:14.929 [2024-07-24 22:15:53.992185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.992196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.992362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.992374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.992545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.992559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.992732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.992744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.992906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.992918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.993136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.993149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.993389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.993401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.993634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.993646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.993934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.993946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.994170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.994182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 
00:28:14.929 [2024-07-24 22:15:53.994432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.994444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.994741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.994753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.995044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.995056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.995154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.995166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.995333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-07-24 22:15:53.995346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-07-24 22:15:53.995604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.995616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.995783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.995796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.995895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.995907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.996085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.996097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.996244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.996256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 
00:28:14.930 [2024-07-24 22:15:53.996425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.996437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.996723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.996735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.996934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.996946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.997095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.997107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.997204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.997216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.997506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.997518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.997672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.997684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.997900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.997912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.998148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.998160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.998454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.998466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 
00:28:14.930 [2024-07-24 22:15:53.998562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.998574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.998738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.998750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.998969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.998981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.999301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.999313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.999572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.999583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:53.999814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:53.999827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.000143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.000155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.000302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.000314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.000624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.000637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.000874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.000886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 
00:28:14.930 [2024-07-24 22:15:54.001123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.001135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.001324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.001343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.001430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.001445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.001663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.001675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.001924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.001937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.002106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.002118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.002354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.002366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.002534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.002546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.002675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.002687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.002913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.002927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 
00:28:14.930 [2024-07-24 22:15:54.003097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.003109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.930 [2024-07-24 22:15:54.003270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.930 [2024-07-24 22:15:54.003282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.930 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.003566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.003578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.003800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.003812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.004031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.004044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.004169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.004181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.004495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.004507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.004725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.004738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.005033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.005045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.005205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.005217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 
00:28:14.931 [2024-07-24 22:15:54.005524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.005537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.005774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.005798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.005969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.005981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.006286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.006299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.006606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.006618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.006770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.006783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.007036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.007048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.007294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.007306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.007521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.007534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.007786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.007820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d6c000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 
00:28:14.931 [2024-07-24 22:15:54.008029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.008057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.008234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.008252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.008358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.008375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.008567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.008583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.008679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.008696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.008882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.008896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.009063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.009076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.009393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.009405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.009633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.009645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.009892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.009904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 
00:28:14.931 [2024-07-24 22:15:54.010083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.010095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.931 qpair failed and we were unable to recover it. 00:28:14.931 [2024-07-24 22:15:54.010408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.931 [2024-07-24 22:15:54.010419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.010734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.010746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.010923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.010935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.011220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.011232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.011487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.011499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.011732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.011744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.012059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.012071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.012381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.012393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.012618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.012630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 
00:28:14.932 [2024-07-24 22:15:54.012958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.012970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.013190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.013202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.013464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.013476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.013650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.013662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.013826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.013838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.014067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.014079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.014314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.014326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.014567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.014579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.014815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.014827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.015080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.015092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 
00:28:14.932 [2024-07-24 22:15:54.015376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.015388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.015632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.015644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.015895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.015907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.016163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.016176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.016415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.016427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.016733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.016746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.017020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.017033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.017198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.017211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.017382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.017394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.017634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.017647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 
00:28:14.932 [2024-07-24 22:15:54.017815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.017827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.018060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.018072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-07-24 22:15:54.018305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-07-24 22:15:54.018317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.018485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.018496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.018655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.018667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.018843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.018855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.019087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.019099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.019407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.019420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.019665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.019677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.019850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.019863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-07-24 22:15:54.020083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.020096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.020342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.020355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.020534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.020547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.020789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.020801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.021085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.021098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.021341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.021353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.021566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.021578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.021726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.021739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.022006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.022018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.022326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.022338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-07-24 22:15:54.022575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.022586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.022863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.022876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.023163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.023175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.023436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.023448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.023623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.023635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.023882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.023894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.023996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.024009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.024299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.024311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.024606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.024617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.024850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.024863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-07-24 22:15:54.025102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.025114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.025354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.025366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.025595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.025607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.025776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.025788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-07-24 22:15:54.025909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-07-24 22:15:54.025921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.026149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.026161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.026445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.026457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.026689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.026701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.026829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.026841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.027074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.027087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 
00:28:14.934 [2024-07-24 22:15:54.027307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.027319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.027553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.027566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.027734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.027748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.028054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.028066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.028295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.028307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.028596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.028608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.028887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.028900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.029208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.029220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.029398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.029410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.029644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.029655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 
00:28:14.934 [2024-07-24 22:15:54.029839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.029851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.030070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.030081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.030256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.030268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.030498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.030510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.030690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.030702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.030918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.030931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.031149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.031161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.031377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.031388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.031618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.031631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.031739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.031751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 
00:28:14.934 [2024-07-24 22:15:54.032001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.032013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.032175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.032187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.032496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.032509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.032677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.032689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.032919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.032932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.033122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.033134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.033325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.033337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.033577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.033590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.033816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.033829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.034061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.034074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 
00:28:14.934 [2024-07-24 22:15:54.034308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.034321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.034512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.034524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-07-24 22:15:54.034758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-07-24 22:15:54.034771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.034988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.035000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.035220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.035232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.035413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.035425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.035581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.035593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.035823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.035835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.036013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.036025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.036242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.036256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 
00:28:14.935 [2024-07-24 22:15:54.036516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.036529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.036840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.036852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.037024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.037036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.037275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.037287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.037532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.037544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.037780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.037792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.037961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.037973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.038277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.038289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.038459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.038471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.038621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.038633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 
00:28:14.935 [2024-07-24 22:15:54.038783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.038795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.039028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.039040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.039260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.039272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.039421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.039433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.039620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.039632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.039870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.039883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.040115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.040127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.040365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.040377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.040561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.040573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.040790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.040802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 
00:28:14.935 [2024-07-24 22:15:54.041032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.041044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.041230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.041242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.041411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.041424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.041700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.041712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.041888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.041900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.042121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.042133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.042421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.042433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.042697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.042708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.043010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.043022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 00:28:14.935 [2024-07-24 22:15:54.043255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.935 [2024-07-24 22:15:54.043267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.935 qpair failed and we were unable to recover it. 
00:28:14.935 [2024-07-24 22:15:54.043521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.043533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.043841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.043853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.044000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.044012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.044343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.044355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.044608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.044620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.044873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.044885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.045060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.045073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.045295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.045306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.045569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.045581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.045743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.045758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 
00:28:14.936 [2024-07-24 22:15:54.045980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.045992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.046138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.046150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.046459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.046471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.046638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.046650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.046863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.046875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.047161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.047173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.047336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.047348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.047529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.047541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.047857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.047869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.048054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.048066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 
00:28:14.936 [2024-07-24 22:15:54.048225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.048238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.048545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.048557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.048811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.048824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.049016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.049028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.049246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.049258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.049544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.049556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.049736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.049748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.049979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.049991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.050255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.050267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.050501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.050513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 
00:28:14.936 [2024-07-24 22:15:54.050770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.050783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.051069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.051081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.051263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.051275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.051562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.051574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.051806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.051819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.052119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.052131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.052299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.052311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.936 qpair failed and we were unable to recover it. 00:28:14.936 [2024-07-24 22:15:54.052562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.936 [2024-07-24 22:15:54.052574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.052790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.052803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.053042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.053054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 
00:28:14.937 [2024-07-24 22:15:54.053271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.053283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.053500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.053512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.053754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.053766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.054010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.054022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.054178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.054190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.054498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.054510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.054725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.054737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.054961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.054973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.055258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.055271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.055576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.055589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 
00:28:14.937 [2024-07-24 22:15:54.055893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.055905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.056181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.056193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.056474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.056486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.056801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.056813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.057121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.057133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.057420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.057432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.057721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.057734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.057899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.057911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.058206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.058217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.058471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.058483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 
00:28:14.937 [2024-07-24 22:15:54.058796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.058808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.059030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.059042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.059271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.059283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.059594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.059606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.059772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.059784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.060001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.060013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.060311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.060323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.060562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-07-24 22:15:54.060574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-07-24 22:15:54.060764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.060776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.060950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.060962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-07-24 22:15:54.061127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.061139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.061436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.061448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.061734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.061746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.061994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.062006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.062304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.062316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.062530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.062542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.062852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.062864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.063092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.063104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.063342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.063354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.063652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.063663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-07-24 22:15:54.063990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.064003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.064318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.064330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.064545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.064558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.064731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.064744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.064910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.064922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.065104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.065116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.065426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.065438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.065696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.065709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.066070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.066082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.066348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.066363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-07-24 22:15:54.066618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.066632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.066806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.066821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.067008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.067020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.067302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.067316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.067570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.067583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.067824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.067837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.068000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.068013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.068190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.068203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.068429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.068442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.068682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.068695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-07-24 22:15:54.069043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.069057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.069342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-07-24 22:15:54.069356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-07-24 22:15:54.069517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.069530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.069783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.069797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.070027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.070041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.070221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.070235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.070570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.070584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.070843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.070856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.071175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.071188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.071494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.071508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 
00:28:14.939 [2024-07-24 22:15:54.071770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.071784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.071952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.071965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.072140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.072153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.072402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.072415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.072576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.072588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.072741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.072753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.073039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.073052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.073220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.073233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.073415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.073429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.073734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.073748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 
00:28:14.939 [2024-07-24 22:15:54.073976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.073989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.074215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.074229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.074401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.074415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.074722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.074736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.075047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.075061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.075298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.075311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.075605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.075618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.075859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.075873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.076030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.076043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.076353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.076368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 
00:28:14.939 [2024-07-24 22:15:54.076596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.076609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.076898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.076912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.077213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.077227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.077489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.077502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-07-24 22:15:54.077733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-07-24 22:15:54.077746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.077921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.077934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.078247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.078261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.078441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.078455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.078687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.078701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.078899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.078913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 
00:28:14.940 [2024-07-24 22:15:54.079141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.079154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.079371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.079384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.079731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.079745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.079975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.079988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.080297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.080310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.080475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.080488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.080729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.080742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.080924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.080938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.081198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.081212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.081499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.081512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 
00:28:14.940 [2024-07-24 22:15:54.081681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.081694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.081983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.081996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.082227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.082240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.082406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.082420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.082583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.082596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.082763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.082777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.083032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.083045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.083277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.083290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.083520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.083533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.083770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.083784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 
00:28:14.940 [2024-07-24 22:15:54.083950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.083964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.084178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.084191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.084421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.084435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.084599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.084612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.084830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.084843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.085080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.085094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.085396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.085409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.085720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.085734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.085958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.085971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.086302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.086318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 
00:28:14.940 [2024-07-24 22:15:54.086631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.086644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.086862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.940 [2024-07-24 22:15:54.086876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.940 qpair failed and we were unable to recover it. 00:28:14.940 [2024-07-24 22:15:54.087051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.087064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.087227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.087240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.087546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.087560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.087774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.087787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.087886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.087898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.088155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.088168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.088367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.088381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.088609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.088622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 
00:28:14.941 [2024-07-24 22:15:54.088923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.088937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.089242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.089255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.089421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.089434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.089721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.089736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.089977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.089990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.090181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.090194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.090425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.090438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.090729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.090742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.090922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.090935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.091269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.091282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 
00:28:14.941 [2024-07-24 22:15:54.091518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.091531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.091816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.091831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.092057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.092070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.092376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.092389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.092644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.092657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.092965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.092979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.093206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.093220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.093448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.093462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.093755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.093768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.094017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.094031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 
00:28:14.941 [2024-07-24 22:15:54.094246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.094259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.094569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.094582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.094764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.094777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.095016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.095029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.095323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.095336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.095553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.095567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.095795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.095809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.096043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.096056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.096287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.941 [2024-07-24 22:15:54.096300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.941 qpair failed and we were unable to recover it. 00:28:14.941 [2024-07-24 22:15:54.096584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.096601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 
00:28:14.942 [2024-07-24 22:15:54.096844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.096857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.097081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.097095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.097194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.097207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.097491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.097504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.097604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.097617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.097767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.097781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.098100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.098114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.098269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.098282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.098505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.098518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.098708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.098725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 
00:28:14.942 [2024-07-24 22:15:54.098902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.098916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.099131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.099145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.099382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.099395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.099708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.099726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.099907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.099921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.100198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.100211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.100433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.100446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.100668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.100682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.101008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.101022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.101246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.101259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 
00:28:14.942 [2024-07-24 22:15:54.101487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.101500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.101729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.101743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.101904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.101917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.102145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.102159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.102443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.102456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.102627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.102640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-07-24 22:15:54.102871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-07-24 22:15:54.102885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.103037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.103051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.103208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.103222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.103463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.103476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 
00:28:15.213 [2024-07-24 22:15:54.103578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.103591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.103815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.103828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.104169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.104183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.104332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.104344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.104533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.104546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.104726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.104739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.104913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.104926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.105128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.105141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.105329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.105342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.105572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.105587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 
00:28:15.213 [2024-07-24 22:15:54.105879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.105892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.106059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.106073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.106315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.213 [2024-07-24 22:15:54.106328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.213 qpair failed and we were unable to recover it. 00:28:15.213 [2024-07-24 22:15:54.106614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.106628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.106897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.106910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.107158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.107172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.107459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.107472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.107635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.107648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.107881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.107894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.108015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.108028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 
00:28:15.214 [2024-07-24 22:15:54.108282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.108295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.108612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.108626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.108725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.108738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.109025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.109039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.109261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.109274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.109581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.109596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.109755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.109769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.109989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.110003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.110153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.110167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.110399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.110413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 
00:28:15.214 [2024-07-24 22:15:54.110643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.110656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.110942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.110957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.111122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.111135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.111366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.111380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.111558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.111572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.111862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.111875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.112168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.112181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.112464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.112479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.112694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.112707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 00:28:15.214 [2024-07-24 22:15:54.112937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.112951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.214 qpair failed and we were unable to recover it. 
00:28:15.214 [2024-07-24 22:15:54.113182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.214 [2024-07-24 22:15:54.113196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.113427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.113441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.113662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.113676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.113915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.113928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.114158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.114172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.114391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.114405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.114711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.114729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.114962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.114976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.115285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.115298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.115470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.115483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 
00:28:15.215 [2024-07-24 22:15:54.115771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.115785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.115968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.115982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.116202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.116216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.116379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.116392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.116678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.116692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.117016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.117029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.117249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.117262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.117475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.117489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.117728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.117742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.117957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.117971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 
00:28:15.215 [2024-07-24 22:15:54.118133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.118147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.118462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.118476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.118659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.118673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.118827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.118841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.119081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.119095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.119270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.119284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.119522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.119535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.119692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.119707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.215 [2024-07-24 22:15:54.119940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.215 [2024-07-24 22:15:54.119954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.215 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.120262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.120276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 
00:28:15.216 [2024-07-24 22:15:54.120560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.120574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.120807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.120821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.121160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.121174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.121354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.121368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.121681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.121695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.122028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.122042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.122226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.122241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.122480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.122494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.122594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.122607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.122913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.122928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 
00:28:15.216 [2024-07-24 22:15:54.123158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.123172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.123401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.123414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.123644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.123657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.123891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.123904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.124140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.124153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.124410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.124423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.124708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.124726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.124892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.124906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.125221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.125234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.125386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.125399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 
00:28:15.216 [2024-07-24 22:15:54.125618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.125631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.125872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.125886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.126200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.126214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.126435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.126448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.126735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.216 [2024-07-24 22:15:54.126748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.216 qpair failed and we were unable to recover it. 00:28:15.216 [2024-07-24 22:15:54.127036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.127050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.127357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.127370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.127606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.127620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.127953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.127966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.128146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.128160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 
00:28:15.217 [2024-07-24 22:15:54.128320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.128334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.128504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.128518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.128694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.128709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.128939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.128953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.129284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.129297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.129556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.129570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.129723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.129736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.129976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.129990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.130172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.130185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.130418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.130431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 
00:28:15.217 [2024-07-24 22:15:54.130667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.130681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.130966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.130980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.131134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.131148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.131431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.131444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.131709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.131727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.131910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.131924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.132255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.132271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.132512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.132526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.132754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.132768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.133052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.133065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 
00:28:15.217 [2024-07-24 22:15:54.133169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.133182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.133482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.133496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.133804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.133818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.134128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.217 [2024-07-24 22:15:54.134143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.217 qpair failed and we were unable to recover it. 00:28:15.217 [2024-07-24 22:15:54.134374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.134389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.134683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.134697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.134918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.134932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.135150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.135164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.135363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.135376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.135707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.135725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 
00:28:15.218 [2024-07-24 22:15:54.135984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.135998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.136149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.136162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.136475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.136489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.136723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.136736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.136952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.136967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.137147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.137160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.137400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.137413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.137631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.137646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.137866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.137880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.138099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.138113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 
00:28:15.218 [2024-07-24 22:15:54.138276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.138289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.138472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.138485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.138703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.138724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.138967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.138980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.139196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.139210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.139446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.139459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.139677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.139691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.139859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.139873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.140183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.140197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.140371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.140385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 
00:28:15.218 [2024-07-24 22:15:54.140695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.140709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.218 [2024-07-24 22:15:54.140952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.218 [2024-07-24 22:15:54.140965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.218 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.141126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.141139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.141295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.141309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.141501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.141514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.141749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.141763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.141998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.142013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.142169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.142182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.142416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.142429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.142667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.142681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 
00:28:15.219 [2024-07-24 22:15:54.142847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.142861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.143097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.143110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.143262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.143275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.143505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.143519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.143736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.143750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.143989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.144002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.144170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.144184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.144490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.144504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.144792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.144805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.144984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.144997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 
00:28:15.219 [2024-07-24 22:15:54.145228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.145242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.219 qpair failed and we were unable to recover it. 00:28:15.219 [2024-07-24 22:15:54.145502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.219 [2024-07-24 22:15:54.145516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.145852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.145866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.146179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.146193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.146429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.146443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.146727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.146740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.146904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.146918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.147136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.147150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.147238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.147250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.147555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.147569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 
00:28:15.220 [2024-07-24 22:15:54.147822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.147836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.147999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.148012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.148175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.148188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.148478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.148492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.148709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.148732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.148918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.148931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.149177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.149191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.149407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.149420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.149597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.149611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.149770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.149783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 
00:28:15.220 [2024-07-24 22:15:54.150107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.150120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.150335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.150348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.150586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.150600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.150844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.150858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.151092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.151105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.151358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.151372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.151594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.151610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.151896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.220 [2024-07-24 22:15:54.151910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.220 qpair failed and we were unable to recover it. 00:28:15.220 [2024-07-24 22:15:54.152145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.152158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.152385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.152399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 
00:28:15.221 [2024-07-24 22:15:54.152628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.152641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.152951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.152966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.153257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.153270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.153485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.153499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.153731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.153745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.154030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.154044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.154258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.154271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.154441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.154456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.154693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.154706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.154959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.154973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 
00:28:15.221 [2024-07-24 22:15:54.155134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.155148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.155477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.155490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.155667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.155682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.155913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.155926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.156183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.156197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.156311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.156325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.156644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.156658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.156957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.156970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.157201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.157214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.157467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.157481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 
00:28:15.221 [2024-07-24 22:15:54.157768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.157782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.157998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.158012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.158162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.158176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.158339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.158353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.158529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.158542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.221 qpair failed and we were unable to recover it. 00:28:15.221 [2024-07-24 22:15:54.158850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.221 [2024-07-24 22:15:54.158864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.159046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.159059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.159282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.159295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.159533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.159546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.159719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.159733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 
00:28:15.222 [2024-07-24 22:15:54.159972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.159985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.160270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.160283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.160616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.160629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.160886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.160899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.161183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.161196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.161456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.161469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.161631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.161647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.161952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.161966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.162252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.162266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.162521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.162534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 
00:28:15.222 [2024-07-24 22:15:54.162818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.162832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.163069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.163083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.163316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.163329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.163569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.163582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.163815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.163828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.164071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.164084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.164251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.164264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.164478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.164492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.164700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.164717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.164865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.164878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 
00:28:15.222 [2024-07-24 22:15:54.165165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.165178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.165466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.165479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.165702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.165719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.165956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.165969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.166203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.166216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.222 [2024-07-24 22:15:54.166446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.222 [2024-07-24 22:15:54.166459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.222 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.166705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.166724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.166965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.166978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.167157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.167170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.167513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.167527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 
00:28:15.223 [2024-07-24 22:15:54.167785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.167799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.168019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.168032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.168362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.168375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.168660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.168674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.168908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.168921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.169084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.169098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.169281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.169294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.169527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.169540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.169769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.169783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.170025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.170039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 
00:28:15.223 [2024-07-24 22:15:54.170279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.170293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.170580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.170593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.170851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.170865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.171097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.171110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.171393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.171406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.171659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.171672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.171909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.171925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.172143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.172157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.172482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.172495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.172794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.172807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 
00:28:15.223 [2024-07-24 22:15:54.173113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.173127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.173436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.173450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.173684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.173697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.173934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.173947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.174175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.174189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.223 qpair failed and we were unable to recover it. 00:28:15.223 [2024-07-24 22:15:54.174480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.223 [2024-07-24 22:15:54.174494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.174723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.174737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.175022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.175035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.175265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.175279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.175504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.175518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 
00:28:15.224 [2024-07-24 22:15:54.175847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.175861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.176063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.176077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.176311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.176324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.176614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.176628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.176845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.176857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.177092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.177106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.177361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.177374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.177633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.177646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.177804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.177818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.178066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.178079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 
00:28:15.224 [2024-07-24 22:15:54.178313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.178326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.178635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.178648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.178947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.178961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.179200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.179214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.179442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.179455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.179687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.179700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.180020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.180033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.180266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.180280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.180585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.180599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.180912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.180926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 
00:28:15.224 [2024-07-24 22:15:54.181143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.181157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.181415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.181428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.181740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.181754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.182026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.182040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.182335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.182348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.182587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.182600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.182830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.182846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.183095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.183108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.183340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.183353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 00:28:15.224 [2024-07-24 22:15:54.183595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.224 [2024-07-24 22:15:54.183608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.224 qpair failed and we were unable to recover it. 
00:28:15.224 [2024-07-24 22:15:54.183925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.183939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.184176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.184190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.184437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.184450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.184686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.184700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.184921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.184935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.185222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.185235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.185464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.185478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.185646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.185659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.185965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.185979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.186279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.186293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 
00:28:15.225 [2024-07-24 22:15:54.186557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.186570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.186729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.186742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.187010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.187024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.187240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.187253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.187503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.187516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.187799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.187812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.187987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.188000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.188255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.188268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.188574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.188587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.188914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.188928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 
00:28:15.225 [2024-07-24 22:15:54.189233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.189247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.189555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.189568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.189767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.189780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.190002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.190016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.190230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.190243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.190591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.190605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.190827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.190840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.191102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.191116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.191411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.191424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.225 [2024-07-24 22:15:54.191646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.191660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 
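(Context for the repeated entries above: errno = 111 is ECONNREFUSED on Linux, so each posix_sock_create / nvme_tcp_qpair_connect_sock pair records the initiator's connect() to 10.0.0.2 port 4420 being actively refused because nothing is listening on the target side at that moment; the qpair therefore cannot be set up or recovered. This is consistent with the condition the nvmf_target_disconnect_tc2 case is exercising rather than a build failure in its own right. As a minimal illustration only, not part of the autotest, a refused TCP connect against a hypothetical closed local port surfaces the same errno:)
(    python3 - <<'PY'
    import errno, socket
    # Illustration: connecting to a port with no listener is expected to raise
    # OSError (ConnectionRefusedError) with errno 111, i.e. ECONNREFUSED, on Linux.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(("127.0.0.1", 4420))   # hypothetical closed port used only for the demo
    except OSError as e:
        print(e.errno, errno.errorcode.get(e.errno))   # typically prints: 111 ECONNREFUSED
    finally:
        s.close()
    PY
)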
00:28:15.225 [2024-07-24 22:15:54.191924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.225 [2024-07-24 22:15:54.191937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.225 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.192177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.192190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.192375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.192389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.192621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.192635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.192965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.192979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.193217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.193231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.193533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.193548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.193839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.193852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.226 [2024-07-24 22:15:54.194141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.194157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 
00:28:15.226 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:15.226 [2024-07-24 22:15:54.194459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.194473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.194692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.194707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:15.226 [2024-07-24 22:15:54.195023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.195039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.226 [2024-07-24 22:15:54.195284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.195299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:15.226 [2024-07-24 22:15:54.195539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.195553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.195778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.195792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.196032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.196045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.196379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.196393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 
00:28:15.226 [2024-07-24 22:15:54.196630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.196644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.196978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.196992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.197159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.197172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.197395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.197409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.197693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.197707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.197962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.197975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.198191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.198204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.198516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.198530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.198834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.198848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.199006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.199019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 
00:28:15.226 [2024-07-24 22:15:54.199320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.199334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.199572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.199586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.199900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.199914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.200221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.200235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.200575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.200589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.200790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.200804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.200963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.200977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.201301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.201315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.201571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.226 [2024-07-24 22:15:54.201584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.226 qpair failed and we were unable to recover it. 00:28:15.226 [2024-07-24 22:15:54.201879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.201892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 
00:28:15.227 [2024-07-24 22:15:54.202143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.202157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.202412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.202425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.202712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.202730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.203047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.203061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.203235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.203249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.203559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.203572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.203902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.203916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.204226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.204242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.204396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.204409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.204584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.204597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 
00:28:15.227 [2024-07-24 22:15:54.204883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.204897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.205065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.205079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.205376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.205389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.205670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.205683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.205991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.206004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.206238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.206251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.206433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.206446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.206606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.206619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.206792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.206806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.207067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.207081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 
00:28:15.227 [2024-07-24 22:15:54.207314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.207327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.207533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.207547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.207787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.207801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.208022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.208035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.208274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.208288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.208538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.208552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.208793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.208807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.208980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.208995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.209272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.209285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.209436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.209449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 
00:28:15.227 [2024-07-24 22:15:54.209666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.209680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.209863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.209877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.210106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.210120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.210428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.210441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.210610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.210623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.210853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.210867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.227 [2024-07-24 22:15:54.211041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.227 [2024-07-24 22:15:54.211055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.227 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.211228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.211240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.211509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.211522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.211789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.211802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 
00:28:15.228 [2024-07-24 22:15:54.212029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.212043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.212204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.212218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.212401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.212414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.212631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.212645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.212874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.212889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.213072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.213086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.213245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.213258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.213476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.213492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.213724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.213737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.213979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.213993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 
00:28:15.228 [2024-07-24 22:15:54.214148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.214160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.214320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.214332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.214559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.214573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.214742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.214756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.215079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.215093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.215261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.215275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.215583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.215597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.215837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.215851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.216157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.216171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.216410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.216424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 
00:28:15.228 [2024-07-24 22:15:54.216661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.216675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.216913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.216927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.217164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.217178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.217511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.217524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.217813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.217827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.218057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.218071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.218251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.218264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.218641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.218655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.218898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.218912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.228 [2024-07-24 22:15:54.219068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.219083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 
00:28:15.228 [2024-07-24 22:15:54.219369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.228 [2024-07-24 22:15:54.219383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.228 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.219583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.219595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.219769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.219782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.219966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.219980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.220145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.220159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.220352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.220366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.220544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.220558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.220767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.220781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.221078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.221091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.221377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.221391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 
00:28:15.229 [2024-07-24 22:15:54.221625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.221638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.221856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.221870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.222025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.222039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.222218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.222232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.222582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.222596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.222944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.222958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.223184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.223198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.223465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.223480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.223711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.223730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.223966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.223980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 
00:28:15.229 [2024-07-24 22:15:54.224161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.224174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.224433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.224447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.224804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.224820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.224991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.225005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.225342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.225356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.225658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.225671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.225872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.225885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.226121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.226137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.226305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.226319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.226557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.226571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 
00:28:15.229 [2024-07-24 22:15:54.226832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.226845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.227027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.227040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.227283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.227297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.227487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.227500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.227725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.227738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.229 [2024-07-24 22:15:54.227978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.229 [2024-07-24 22:15:54.227992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.229 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.228178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.228194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.228442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.228455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.228782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.228796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.229088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.229102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 
00:28:15.230 [2024-07-24 22:15:54.229331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.229344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.229647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.229660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.229892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.229907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.230143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.230158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.230386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.230400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.230558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.230571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.230812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.230826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.231092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.231106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.231391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.231404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.231622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.231636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 
00:28:15.230 [2024-07-24 22:15:54.231863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.231877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.232107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.232121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.232335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.232348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.232520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.232534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.232818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.232832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.233070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.233084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.233344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.233358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.233566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.233582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.233801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.233815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.234022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.234035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 
00:28:15.230 [2024-07-24 22:15:54.234228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.234241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.234547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.234563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.234818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.234832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.235079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.235092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.235269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.230 [2024-07-24 22:15:54.235283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.230 qpair failed and we were unable to recover it. 00:28:15.230 [2024-07-24 22:15:54.235622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.235635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.235897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.235910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.236099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.236113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.236377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.236392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.236564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.236578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 
00:28:15.231 [2024-07-24 22:15:54.236884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.236898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.237088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.237102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.237435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.237448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.237687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.237701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.237954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.237968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.238140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.238154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.238355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.238369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.238617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.238631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.238870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.238884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.239126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.239139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 
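Note: errno = 111 is ECONNREFUSED on Linux, so each posix_sock_create / nvme_tcp_qpair_connect_sock pair above is the host-side NVMe/TCP initiator being refused while it retries its connection to 10.0.0.2 on port 4420 (the standard NVMe-oF port); long bursts of these records are the expected failure mode of the target_disconnect test while the target is not accepting connections. A minimal, hypothetical bash probe of the same address and port (assuming a bash build with /dev/tcp support), shown only to illustrate the refused connection, not taken from the test scripts:

    # sketch only: check whether anything is listening on 10.0.0.2:4420
    timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' \
        && echo "port 4420 is accepting connections" \
        || echo "connect refused or timed out (matches errno 111 above)"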
00:28:15.231 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.231 [2024-07-24 22:15:54.239379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.239394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.239632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.239646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:15.231 [2024-07-24 22:15:54.239905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.239920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.231 [2024-07-24 22:15:54.240173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.240188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:15.231 [2024-07-24 22:15:54.240358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.240373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.240603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.240617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.240796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.240810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.240962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.240976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 
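Note: interleaved with the connection errors, the test script (host/target_disconnect.sh, tc2) installs its cleanup trap and begins provisioning the target. The call rpc_cmd bdev_malloc_create 64 512 -b Malloc0 creates a 64 MB RAM-backed bdev with a 512-byte block size named Malloc0; rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, so, assuming the default /var/tmp/spdk.sock application socket, a rough stand-alone equivalent would be:

    # sketch only: create a 64 MB malloc bdev with 512 B blocks, named Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0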
00:28:15.231 [2024-07-24 22:15:54.241217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.241231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.241537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.241550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.241803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.241817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.242003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.242017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.242210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.242224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.242400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.242414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.242646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.242660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.242894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.242908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.243101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.231 [2024-07-24 22:15:54.243114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.231 qpair failed and we were unable to recover it. 00:28:15.231 [2024-07-24 22:15:54.243301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.243315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 
00:28:15.232 [2024-07-24 22:15:54.243568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.243581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.243762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.243776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.244004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.244018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.244303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.244317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.244479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.244493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.244724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.244737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.244916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.244929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.245176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.245190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.245492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.245505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.245811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.245825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 
00:28:15.232 [2024-07-24 22:15:54.246109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.246123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.246415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.246428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.246766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.246780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.246969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.246983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.247148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.247161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.247337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.247351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.247666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.247680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.247984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.247999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.248176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.248190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.248358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.248371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 
00:28:15.232 [2024-07-24 22:15:54.248656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.248670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.248823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.248837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.249026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.249040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.249270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.249284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.249528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.249542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.249791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.249806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.250045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.250058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.250280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.250293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.250617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.250631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.250884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.250898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 
00:28:15.232 [2024-07-24 22:15:54.251190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.251205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.251445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.251458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.251677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.251692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.251929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.251944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.232 [2024-07-24 22:15:54.252182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.232 [2024-07-24 22:15:54.252196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.232 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.252349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.252362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.252622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.252638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.252880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.252897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.253136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.253151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.253328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.253343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 
00:28:15.233 [2024-07-24 22:15:54.253576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.253590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.253742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.253758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.254054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.254069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.254357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.254373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.254612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.254627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.254849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.254864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.255104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.255118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.255378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.255392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.255648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.255662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.255970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.255984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 
00:28:15.233 [2024-07-24 22:15:54.256228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.256242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.256428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.256442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.256756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.256770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.257054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.257067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.257286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.257301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.257554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.257568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.257782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.257796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d70000b90 with addr=10.0.0.2, port=4420 00:28:15.233 Malloc0 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.258171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.258213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.233 [2024-07-24 22:15:54.258593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.258631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 
00:28:15.233 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:15.233 [2024-07-24 22:15:54.258945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.258966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.259237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.233 [2024-07-24 22:15:54.259255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.259487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.259505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.233 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.259836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.259861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.260137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.260155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.260481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.260499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.260686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.260703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.260971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.260989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-07-24 22:15:54.261309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-07-24 22:15:54.261327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
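Note: host/target_disconnect.sh@21 issues rpc_cmd nvmf_create_transport -t tcp -o to register the TCP transport inside the target before any listeners or subsystems are added. Assuming the same default RPC socket as above, the core of that call corresponds to the sketch below; the extra -o switch is a harness-chosen transport option and is deliberately left out of the sketch:

    # sketch only: register the NVMe-oF TCP transport in the running SPDK target
    scripts/rpc.py nvmf_create_transport -t tcp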
00:28:15.234 [2024-07-24 22:15:54.261681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.261699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.261942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.261959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.262282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.262299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.262526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.262544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.262723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.262741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.263014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.263032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.263348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.263365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.263674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.263691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.264047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.264065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.264383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.264400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.234 [2024-07-24 22:15:54.264722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.264741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.265094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.265112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.265276] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.234 [2024-07-24 22:15:54.265374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.265390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.265643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.265661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.265957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.265975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.266164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.266182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.266441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.266458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.266781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.266799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.267029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.267046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
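Note: the NOTICE from tcp.c (nvmf_tcp_create: *** TCP Transport Init ***) in the middle of this block is the target acknowledging the nvmf_create_transport call above. A hypothetical out-of-band check against the same default RPC socket, not part of the test flow, would be:

    # sketch only: list transports registered in the target; expect an entry with trtype TCP
    scripts/rpc.py nvmf_get_transports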
00:28:15.234 [2024-07-24 22:15:54.267297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.267314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.267630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.267647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.267917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.267935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.268251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.268268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.268526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.268544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.268826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.268844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.269175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.269192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.269486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.269504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.269796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.269814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.270079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.270096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.234 [2024-07-24 22:15:54.270339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.270357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.270609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.270627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.270934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.270952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.271275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.271293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.271569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.271587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2d64000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.271868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-07-24 22:15:54.271891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-07-24 22:15:54.272085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.272103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.272403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.272421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.272617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.272634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.272879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.272899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 
00:28:15.235 [2024-07-24 22:15:54.273246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.273264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.273529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.273546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.273795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.273813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.235 [2024-07-24 22:15:54.274110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.274128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.274395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.235 [2024-07-24 22:15:54.274413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.274651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.274669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.235 [2024-07-24 22:15:54.274920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.274939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:15.235 [2024-07-24 22:15:54.275250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.275268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 
00:28:15.235 [2024-07-24 22:15:54.275529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.275547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.275880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.275899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.276229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.276247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.276492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.276510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.276839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.276858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.277155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.277173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.277475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.277493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.277788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.277807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.278128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.278146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.278484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.278501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 
00:28:15.235 [2024-07-24 22:15:54.278841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.278859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.279175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.279193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.279434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.279456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.279704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.279728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-07-24 22:15:54.280030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-07-24 22:15:54.280047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.280352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.280369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.280685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.280702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.281006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.281023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.281343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.281361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.281703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.281725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 
00:28:15.236 [2024-07-24 22:15:54.282038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.282055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.282308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.282326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.282668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.282686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.283016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.283034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.283211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.283228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.283478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.283496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.283820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.283839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.284135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.284153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.284469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.284487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.284655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.284673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 
00:28:15.236 [2024-07-24 22:15:54.284996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.285014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.285313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.285332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.285647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.285665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.285903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.285921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.236 [2024-07-24 22:15:54.286170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.286188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:15.236 [2024-07-24 22:15:54.286450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.286468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.236 [2024-07-24 22:15:54.286708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.286740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:15.236 [2024-07-24 22:15:54.287036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.287054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 
00:28:15.236 [2024-07-24 22:15:54.287287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.287305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.287651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.287669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.287962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.287980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.288300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.288317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.288555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.288573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.288890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.288909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.289228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.289246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.289541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.289559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.289784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.289802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 00:28:15.236 [2024-07-24 22:15:54.290120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.236 [2024-07-24 22:15:54.290137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.236 qpair failed and we were unable to recover it. 
00:28:15.236 [2024-07-24 22:15:54.290447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.290465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.290820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.290837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.291084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.291101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.291398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.291415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.291659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.291676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.291931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.291949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.292239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.292257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.292555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.292573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.292920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.292938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.293238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.293256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 
00:28:15.237 [2024-07-24 22:15:54.293581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.293599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.293863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.293881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.237 [2024-07-24 22:15:54.294202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.294220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.237 [2024-07-24 22:15:54.294568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.294586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.237 [2024-07-24 22:15:54.294854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.294872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:15.237 [2024-07-24 22:15:54.295139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.295158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.295390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.295407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.295704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.295726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 
00:28:15.237 [2024-07-24 22:15:54.296020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.296038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.296265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.296283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.296601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.296619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.296921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.296940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.297260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.297278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.297453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.237 [2024-07-24 22:15:54.297472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f71a0 with addr=10.0.0.2, port=4420 00:28:15.237 qpair failed and we were unable to recover it. 
00:28:15.237 [2024-07-24 22:15:54.297534] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:15.237 [2024-07-24 22:15:54.305892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.237 [2024-07-24 22:15:54.306007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.237 [2024-07-24 22:15:54.306033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.237 [2024-07-24 22:15:54.306047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.237 [2024-07-24 22:15:54.306063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.237 [2024-07-24 22:15:54.306092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.237 22:15:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2852792 00:28:15.237 [2024-07-24 22:15:54.315905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.237 [2024-07-24 22:15:54.315995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.237 [2024-07-24 22:15:54.316016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.237 [2024-07-24 22:15:54.316028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.237 [2024-07-24 22:15:54.316039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.237 [2024-07-24 22:15:54.316060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.237 qpair failed and we were unable to recover it. 
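The shell trace interleaved with the log entries above drives the target-side setup through the test's rpc_cmd helper; gathered in one place, the calls it shows are the following sketch (a reconstruction of the trace, not additional commands — rpc_cmd is assumed to be the autotest wrapper around SPDK's RPC client and requires the autotest environment to be sourced):

  # Target-side setup as traced from host/target_disconnect.sh (tc2):
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After the last call the target reports "NVMe/TCP Target Listening on 10.0.0.2 port 4420", and the errors that follow change character: instead of refused TCP connections, the host's Fabrics CONNECT commands are rejected by the target ("Unknown controller ID 0x1"), and each qpair again fails to recover.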
00:28:15.237 [2024-07-24 22:15:54.325893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.237 [2024-07-24 22:15:54.325976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.237 [2024-07-24 22:15:54.325994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.237 [2024-07-24 22:15:54.326005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.237 [2024-07-24 22:15:54.326013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.237 [2024-07-24 22:15:54.326033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.237 qpair failed and we were unable to recover it. 00:28:15.237 [2024-07-24 22:15:54.335884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.237 [2024-07-24 22:15:54.335989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.237 [2024-07-24 22:15:54.336008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.238 [2024-07-24 22:15:54.336019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.238 [2024-07-24 22:15:54.336028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.238 [2024-07-24 22:15:54.336047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-07-24 22:15:54.345847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.238 [2024-07-24 22:15:54.345931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.238 [2024-07-24 22:15:54.345950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.238 [2024-07-24 22:15:54.345960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.238 [2024-07-24 22:15:54.345969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.238 [2024-07-24 22:15:54.345988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.238 qpair failed and we were unable to recover it. 
00:28:15.238 [2024-07-24 22:15:54.355875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.238 [2024-07-24 22:15:54.355954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.238 [2024-07-24 22:15:54.355973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.238 [2024-07-24 22:15:54.355983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.238 [2024-07-24 22:15:54.355992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.238 [2024-07-24 22:15:54.356010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-07-24 22:15:54.365914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.238 [2024-07-24 22:15:54.365989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.238 [2024-07-24 22:15:54.366007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.238 [2024-07-24 22:15:54.366017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.238 [2024-07-24 22:15:54.366026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.238 [2024-07-24 22:15:54.366043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-07-24 22:15:54.375889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.238 [2024-07-24 22:15:54.376014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.238 [2024-07-24 22:15:54.376032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.238 [2024-07-24 22:15:54.376042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.238 [2024-07-24 22:15:54.376051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.238 [2024-07-24 22:15:54.376068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.238 qpair failed and we were unable to recover it. 
00:28:15.238 [2024-07-24 22:15:54.385932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.238 [2024-07-24 22:15:54.386012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.238 [2024-07-24 22:15:54.386030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.238 [2024-07-24 22:15:54.386039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.238 [2024-07-24 22:15:54.386048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.238 [2024-07-24 22:15:54.386065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-07-24 22:15:54.395960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.238 [2024-07-24 22:15:54.396041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.238 [2024-07-24 22:15:54.396062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.238 [2024-07-24 22:15:54.396073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.238 [2024-07-24 22:15:54.396081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.238 [2024-07-24 22:15:54.396099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-07-24 22:15:54.405972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.238 [2024-07-24 22:15:54.406054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.238 [2024-07-24 22:15:54.406072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.238 [2024-07-24 22:15:54.406081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.238 [2024-07-24 22:15:54.406090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.238 [2024-07-24 22:15:54.406107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.238 qpair failed and we were unable to recover it. 
00:28:15.499 [2024-07-24 22:15:54.415994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.499 [2024-07-24 22:15:54.416074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.499 [2024-07-24 22:15:54.416092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.499 [2024-07-24 22:15:54.416104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.499 [2024-07-24 22:15:54.416113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.499 [2024-07-24 22:15:54.416130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.499 qpair failed and we were unable to recover it. 00:28:15.499 [2024-07-24 22:15:54.426056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.499 [2024-07-24 22:15:54.426142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.499 [2024-07-24 22:15:54.426160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.499 [2024-07-24 22:15:54.426170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.499 [2024-07-24 22:15:54.426179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.499 [2024-07-24 22:15:54.426196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.499 qpair failed and we were unable to recover it. 00:28:15.499 [2024-07-24 22:15:54.436111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.499 [2024-07-24 22:15:54.436189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.499 [2024-07-24 22:15:54.436206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.499 [2024-07-24 22:15:54.436216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.499 [2024-07-24 22:15:54.436225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.499 [2024-07-24 22:15:54.436245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.499 qpair failed and we were unable to recover it. 
00:28:15.499 [2024-07-24 22:15:54.446126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.499 [2024-07-24 22:15:54.446204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.499 [2024-07-24 22:15:54.446222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.499 [2024-07-24 22:15:54.446232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.499 [2024-07-24 22:15:54.446241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.499 [2024-07-24 22:15:54.446258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.499 qpair failed and we were unable to recover it. 00:28:15.499 [2024-07-24 22:15:54.456127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.499 [2024-07-24 22:15:54.456205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.499 [2024-07-24 22:15:54.456222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.499 [2024-07-24 22:15:54.456232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.499 [2024-07-24 22:15:54.456241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.499 [2024-07-24 22:15:54.456258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.499 qpair failed and we were unable to recover it. 00:28:15.499 [2024-07-24 22:15:54.466106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.499 [2024-07-24 22:15:54.466188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.499 [2024-07-24 22:15:54.466205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.499 [2024-07-24 22:15:54.466215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.499 [2024-07-24 22:15:54.466223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.499 [2024-07-24 22:15:54.466240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.499 qpair failed and we were unable to recover it. 
00:28:15.499 [2024-07-24 22:15:54.476215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.499 [2024-07-24 22:15:54.476295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.499 [2024-07-24 22:15:54.476315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.499 [2024-07-24 22:15:54.476327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.499 [2024-07-24 22:15:54.476337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.476354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.500 [2024-07-24 22:15:54.486244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.486398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.486418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.486428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.486437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.486454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.500 [2024-07-24 22:15:54.496243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.496326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.496344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.496354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.496363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.496380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 
00:28:15.500 [2024-07-24 22:15:54.506241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.506322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.506340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.506350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.506359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.506375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.500 [2024-07-24 22:15:54.516329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.516406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.516424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.516434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.516443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.516461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.500 [2024-07-24 22:15:54.526339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.526420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.526438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.526448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.526457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.526477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 
00:28:15.500 [2024-07-24 22:15:54.536338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.536418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.536436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.536446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.536455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.536472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.500 [2024-07-24 22:15:54.546563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.546661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.546679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.546689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.546697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.546719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.500 [2024-07-24 22:15:54.556525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.556680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.556697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.556707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.556721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.556739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 
00:28:15.500 [2024-07-24 22:15:54.566539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.566619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.566636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.566646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.566654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.566671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.500 [2024-07-24 22:15:54.576494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.576570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.576592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.576602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.576611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.576628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.500 [2024-07-24 22:15:54.586544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.586621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.586639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.586649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.586658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.586675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 
00:28:15.500 [2024-07-24 22:15:54.596532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.596611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.596628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.596638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.596648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.596664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.500 [2024-07-24 22:15:54.606578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.500 [2024-07-24 22:15:54.606658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.500 [2024-07-24 22:15:54.606676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.500 [2024-07-24 22:15:54.606686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.500 [2024-07-24 22:15:54.606694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.500 [2024-07-24 22:15:54.606711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.500 qpair failed and we were unable to recover it. 00:28:15.501 [2024-07-24 22:15:54.616645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.616835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.616853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.616864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.616876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.616894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 
00:28:15.501 [2024-07-24 22:15:54.626612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.626690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.626708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.626724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.626733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.626750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 00:28:15.501 [2024-07-24 22:15:54.636660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.636742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.636760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.636770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.636779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.636796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 00:28:15.501 [2024-07-24 22:15:54.646692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.646774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.646792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.646802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.646810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.646827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 
00:28:15.501 [2024-07-24 22:15:54.656698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.656860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.656880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.656890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.656899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.656917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 00:28:15.501 [2024-07-24 22:15:54.666732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.666830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.666849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.666860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.666868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.666886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 00:28:15.501 [2024-07-24 22:15:54.676699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.676791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.676809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.676819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.676828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.676845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 
00:28:15.501 [2024-07-24 22:15:54.686780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.686860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.686879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.686889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.686897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.686915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 00:28:15.501 [2024-07-24 22:15:54.696831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.696914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.696932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.696942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.696950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.696968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 00:28:15.501 [2024-07-24 22:15:54.706845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.501 [2024-07-24 22:15:54.706927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.501 [2024-07-24 22:15:54.706945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.501 [2024-07-24 22:15:54.706955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.501 [2024-07-24 22:15:54.706967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.501 [2024-07-24 22:15:54.706984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.501 qpair failed and we were unable to recover it. 
00:28:15.762 [2024-07-24 22:15:54.716883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.762 [2024-07-24 22:15:54.716965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.762 [2024-07-24 22:15:54.716983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.762 [2024-07-24 22:15:54.716994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.762 [2024-07-24 22:15:54.717002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.762 [2024-07-24 22:15:54.717020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.762 qpair failed and we were unable to recover it. 00:28:15.762 [2024-07-24 22:15:54.726920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.762 [2024-07-24 22:15:54.727080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.762 [2024-07-24 22:15:54.727098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.762 [2024-07-24 22:15:54.727108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.762 [2024-07-24 22:15:54.727117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.762 [2024-07-24 22:15:54.727135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.762 qpair failed and we were unable to recover it. 00:28:15.762 [2024-07-24 22:15:54.736952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.762 [2024-07-24 22:15:54.737032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.762 [2024-07-24 22:15:54.737050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.762 [2024-07-24 22:15:54.737060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.762 [2024-07-24 22:15:54.737069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.762 [2024-07-24 22:15:54.737086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.762 qpair failed and we were unable to recover it. 
00:28:15.762 [2024-07-24 22:15:54.746958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.762 [2024-07-24 22:15:54.747047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.762 [2024-07-24 22:15:54.747064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.762 [2024-07-24 22:15:54.747074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.762 [2024-07-24 22:15:54.747083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.762 [2024-07-24 22:15:54.747100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.762 qpair failed and we were unable to recover it. 00:28:15.762 [2024-07-24 22:15:54.757014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.762 [2024-07-24 22:15:54.757096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.762 [2024-07-24 22:15:54.757114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.762 [2024-07-24 22:15:54.757124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.762 [2024-07-24 22:15:54.757133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.762 [2024-07-24 22:15:54.757150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.762 qpair failed and we were unable to recover it. 00:28:15.762 [2024-07-24 22:15:54.767035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.762 [2024-07-24 22:15:54.767145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.767163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.767172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.767181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.767199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 
00:28:15.763 [2024-07-24 22:15:54.776971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.777049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.777066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.777076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.777085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.777102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 00:28:15.763 [2024-07-24 22:15:54.787089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.787170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.787187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.787197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.787206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.787222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 00:28:15.763 [2024-07-24 22:15:54.797115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.797198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.797215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.797225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.797237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.797254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 
00:28:15.763 [2024-07-24 22:15:54.807103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.807183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.807201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.807211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.807219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.807236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 00:28:15.763 [2024-07-24 22:15:54.817148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.817229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.817247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.817257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.817266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.817283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 00:28:15.763 [2024-07-24 22:15:54.827301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.827461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.827479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.827489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.827498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.827515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 
00:28:15.763 [2024-07-24 22:15:54.837227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.837310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.837327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.837337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.837346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.837363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 00:28:15.763 [2024-07-24 22:15:54.847227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.847306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.847324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.847334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.847343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.847359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 00:28:15.763 [2024-07-24 22:15:54.857224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.857392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.857409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.857419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.857428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.857447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 
00:28:15.763 [2024-07-24 22:15:54.867286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.867367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.867385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.867395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.867404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.867421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 00:28:15.763 [2024-07-24 22:15:54.877360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.877497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.877514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.877524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.877533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.877550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 00:28:15.763 [2024-07-24 22:15:54.887307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.887467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.887484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.887497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.763 [2024-07-24 22:15:54.887506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.763 [2024-07-24 22:15:54.887523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.763 qpair failed and we were unable to recover it. 
00:28:15.763 [2024-07-24 22:15:54.897368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.763 [2024-07-24 22:15:54.897445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.763 [2024-07-24 22:15:54.897463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.763 [2024-07-24 22:15:54.897473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.764 [2024-07-24 22:15:54.897482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.764 [2024-07-24 22:15:54.897499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.764 qpair failed and we were unable to recover it. 00:28:15.764 [2024-07-24 22:15:54.907347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.764 [2024-07-24 22:15:54.907423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.764 [2024-07-24 22:15:54.907442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.764 [2024-07-24 22:15:54.907452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.764 [2024-07-24 22:15:54.907461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.764 [2024-07-24 22:15:54.907478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.764 qpair failed and we were unable to recover it. 00:28:15.764 [2024-07-24 22:15:54.917444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.764 [2024-07-24 22:15:54.917523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.764 [2024-07-24 22:15:54.917541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.764 [2024-07-24 22:15:54.917552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.764 [2024-07-24 22:15:54.917561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.764 [2024-07-24 22:15:54.917577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.764 qpair failed and we were unable to recover it. 
00:28:15.764 [2024-07-24 22:15:54.927484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.764 [2024-07-24 22:15:54.927559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.764 [2024-07-24 22:15:54.927577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.764 [2024-07-24 22:15:54.927587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.764 [2024-07-24 22:15:54.927596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.764 [2024-07-24 22:15:54.927614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.764 qpair failed and we were unable to recover it. 00:28:15.764 [2024-07-24 22:15:54.937506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.764 [2024-07-24 22:15:54.937584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.764 [2024-07-24 22:15:54.937603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.764 [2024-07-24 22:15:54.937613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.764 [2024-07-24 22:15:54.937622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.764 [2024-07-24 22:15:54.937638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.764 qpair failed and we were unable to recover it. 00:28:15.764 [2024-07-24 22:15:54.947550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.764 [2024-07-24 22:15:54.947634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.764 [2024-07-24 22:15:54.947651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.764 [2024-07-24 22:15:54.947662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.764 [2024-07-24 22:15:54.947670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.764 [2024-07-24 22:15:54.947687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.764 qpair failed and we were unable to recover it. 
00:28:15.764 [2024-07-24 22:15:54.957551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.764 [2024-07-24 22:15:54.957630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.764 [2024-07-24 22:15:54.957648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.764 [2024-07-24 22:15:54.957657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.764 [2024-07-24 22:15:54.957666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.764 [2024-07-24 22:15:54.957683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.764 qpair failed and we were unable to recover it. 00:28:15.764 [2024-07-24 22:15:54.967592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.764 [2024-07-24 22:15:54.967755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.764 [2024-07-24 22:15:54.967773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.764 [2024-07-24 22:15:54.967783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.764 [2024-07-24 22:15:54.967791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:15.764 [2024-07-24 22:15:54.967809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.764 qpair failed and we were unable to recover it. 00:28:16.025 [2024-07-24 22:15:54.977621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.025 [2024-07-24 22:15:54.977707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.025 [2024-07-24 22:15:54.977729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.025 [2024-07-24 22:15:54.977743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.025 [2024-07-24 22:15:54.977753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.025 [2024-07-24 22:15:54.977771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.025 qpair failed and we were unable to recover it. 
00:28:16.025 [2024-07-24 22:15:54.987658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.025 [2024-07-24 22:15:54.987745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.025 [2024-07-24 22:15:54.987763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.025 [2024-07-24 22:15:54.987774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.025 [2024-07-24 22:15:54.987783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.025 [2024-07-24 22:15:54.987800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.025 qpair failed and we were unable to recover it. 00:28:16.025 [2024-07-24 22:15:54.997613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.025 [2024-07-24 22:15:54.997693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.025 [2024-07-24 22:15:54.997711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.025 [2024-07-24 22:15:54.997725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.025 [2024-07-24 22:15:54.997734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.025 [2024-07-24 22:15:54.997752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.025 qpair failed and we were unable to recover it. 00:28:16.025 [2024-07-24 22:15:55.007687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.025 [2024-07-24 22:15:55.007770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.025 [2024-07-24 22:15:55.007789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.025 [2024-07-24 22:15:55.007799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.025 [2024-07-24 22:15:55.007807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.025 [2024-07-24 22:15:55.007825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.025 qpair failed and we were unable to recover it. 
00:28:16.025 [2024-07-24 22:15:55.017751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.025 [2024-07-24 22:15:55.017833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.025 [2024-07-24 22:15:55.017852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.017862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.017871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.017888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 00:28:16.026 [2024-07-24 22:15:55.027777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.027861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.027879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.027889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.027898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.027915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 00:28:16.026 [2024-07-24 22:15:55.037799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.037877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.037894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.037904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.037913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.037930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 
00:28:16.026 [2024-07-24 22:15:55.047896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.047978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.047996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.048006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.048015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.048032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 00:28:16.026 [2024-07-24 22:15:55.057803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.057886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.057904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.057915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.057925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.057943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 00:28:16.026 [2024-07-24 22:15:55.067813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.067961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.067978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.067991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.068000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.068017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 
00:28:16.026 [2024-07-24 22:15:55.077930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.078007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.078025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.078035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.078043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.078061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 00:28:16.026 [2024-07-24 22:15:55.087956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.088036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.088055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.088065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.088073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.088090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 00:28:16.026 [2024-07-24 22:15:55.097965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.098047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.098065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.098075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.098084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.098101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 
00:28:16.026 [2024-07-24 22:15:55.107992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.108072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.108089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.108098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.108107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.108124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 00:28:16.026 [2024-07-24 22:15:55.118046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.118126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.118144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.118153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.118162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.118179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 00:28:16.026 [2024-07-24 22:15:55.128009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.128122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.128139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.128149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.128158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.128175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 
00:28:16.026 [2024-07-24 22:15:55.138110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.138187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.138205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.138215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.138224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.138241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.026 qpair failed and we were unable to recover it. 00:28:16.026 [2024-07-24 22:15:55.148132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.026 [2024-07-24 22:15:55.148217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.026 [2024-07-24 22:15:55.148235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.026 [2024-07-24 22:15:55.148245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.026 [2024-07-24 22:15:55.148254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.026 [2024-07-24 22:15:55.148271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.027 qpair failed and we were unable to recover it. 00:28:16.027 [2024-07-24 22:15:55.158098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.027 [2024-07-24 22:15:55.158177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.027 [2024-07-24 22:15:55.158197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.027 [2024-07-24 22:15:55.158207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.027 [2024-07-24 22:15:55.158216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.027 [2024-07-24 22:15:55.158232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.027 qpair failed and we were unable to recover it. 
00:28:16.027 [2024-07-24 22:15:55.168171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.027 [2024-07-24 22:15:55.168253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.027 [2024-07-24 22:15:55.168270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.027 [2024-07-24 22:15:55.168281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.027 [2024-07-24 22:15:55.168289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.027 [2024-07-24 22:15:55.168306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.027 qpair failed and we were unable to recover it. 00:28:16.027 [2024-07-24 22:15:55.178203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.027 [2024-07-24 22:15:55.178281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.027 [2024-07-24 22:15:55.178299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.027 [2024-07-24 22:15:55.178309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.027 [2024-07-24 22:15:55.178317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.027 [2024-07-24 22:15:55.178335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.027 qpair failed and we were unable to recover it. 00:28:16.027 [2024-07-24 22:15:55.188256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.027 [2024-07-24 22:15:55.188335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.027 [2024-07-24 22:15:55.188353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.027 [2024-07-24 22:15:55.188363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.027 [2024-07-24 22:15:55.188372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.027 [2024-07-24 22:15:55.188389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.027 qpair failed and we were unable to recover it. 
00:28:16.027 [2024-07-24 22:15:55.198305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.027 [2024-07-24 22:15:55.198386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.027 [2024-07-24 22:15:55.198404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.027 [2024-07-24 22:15:55.198414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.027 [2024-07-24 22:15:55.198422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.027 [2024-07-24 22:15:55.198443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.027 qpair failed and we were unable to recover it. 00:28:16.027 [2024-07-24 22:15:55.208243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.027 [2024-07-24 22:15:55.208332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.027 [2024-07-24 22:15:55.208349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.027 [2024-07-24 22:15:55.208359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.027 [2024-07-24 22:15:55.208368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.027 [2024-07-24 22:15:55.208385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.027 qpair failed and we were unable to recover it. 00:28:16.027 [2024-07-24 22:15:55.218349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.027 [2024-07-24 22:15:55.218428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.027 [2024-07-24 22:15:55.218445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.027 [2024-07-24 22:15:55.218455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.027 [2024-07-24 22:15:55.218464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.027 [2024-07-24 22:15:55.218481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.027 qpair failed and we were unable to recover it. 
00:28:16.027 [2024-07-24 22:15:55.228286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.027 [2024-07-24 22:15:55.228363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.027 [2024-07-24 22:15:55.228385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.027 [2024-07-24 22:15:55.228395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.027 [2024-07-24 22:15:55.228404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.027 [2024-07-24 22:15:55.228421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.027 qpair failed and we were unable to recover it. 00:28:16.288 [2024-07-24 22:15:55.238372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.238448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.238466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.238476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.238485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.238501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 00:28:16.288 [2024-07-24 22:15:55.248423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.248496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.248516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.248526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.248535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.248553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 
00:28:16.288 [2024-07-24 22:15:55.258456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.258543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.258560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.258570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.258579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.258596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 00:28:16.288 [2024-07-24 22:15:55.268477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.268555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.268572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.268583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.268592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.268608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 00:28:16.288 [2024-07-24 22:15:55.278480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.278559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.278577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.278587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.278596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.278612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 
00:28:16.288 [2024-07-24 22:15:55.288539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.288618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.288635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.288645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.288654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.288674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 00:28:16.288 [2024-07-24 22:15:55.298570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.298652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.298670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.298680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.298689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.298705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 00:28:16.288 [2024-07-24 22:15:55.308594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.308673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.308690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.308700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.308709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.308731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 
00:28:16.288 [2024-07-24 22:15:55.318594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.318756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.318774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.318784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.318792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.318810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 00:28:16.288 [2024-07-24 22:15:55.328652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.328735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.288 [2024-07-24 22:15:55.328752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.288 [2024-07-24 22:15:55.328762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.288 [2024-07-24 22:15:55.328771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.288 [2024-07-24 22:15:55.328788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.288 qpair failed and we were unable to recover it. 00:28:16.288 [2024-07-24 22:15:55.338637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.288 [2024-07-24 22:15:55.338712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.338737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.338747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.338756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.338774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 
00:28:16.289 [2024-07-24 22:15:55.348698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.348781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.348798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.348808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.348817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.348834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 00:28:16.289 [2024-07-24 22:15:55.358700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.358784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.358802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.358812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.358820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.358837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 00:28:16.289 [2024-07-24 22:15:55.368748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.368831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.368849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.368859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.368868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.368885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 
00:28:16.289 [2024-07-24 22:15:55.378784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.378902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.378920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.378929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.378938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.378958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 00:28:16.289 [2024-07-24 22:15:55.388814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.388893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.388911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.388921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.388930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.388946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 00:28:16.289 [2024-07-24 22:15:55.398815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.398892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.398910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.398920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.398928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.398945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 
00:28:16.289 [2024-07-24 22:15:55.408883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.408962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.408979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.408989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.408998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.409015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 00:28:16.289 [2024-07-24 22:15:55.418898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.418973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.418990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.419000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.419009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.419026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 00:28:16.289 [2024-07-24 22:15:55.428931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.429012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.429032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.429042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.429051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.429068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 
00:28:16.289 [2024-07-24 22:15:55.438942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.439020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.439037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.439047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.439055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.439072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 00:28:16.289 [2024-07-24 22:15:55.448985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.449063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.449081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.449090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.449099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.449116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 00:28:16.289 [2024-07-24 22:15:55.459052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.459132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.459150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.459160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.289 [2024-07-24 22:15:55.459169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.289 [2024-07-24 22:15:55.459186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.289 qpair failed and we were unable to recover it. 
00:28:16.289 [2024-07-24 22:15:55.469037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.289 [2024-07-24 22:15:55.469117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.289 [2024-07-24 22:15:55.469135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.289 [2024-07-24 22:15:55.469144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.290 [2024-07-24 22:15:55.469156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.290 [2024-07-24 22:15:55.469173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.290 qpair failed and we were unable to recover it. 00:28:16.290 [2024-07-24 22:15:55.479062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.290 [2024-07-24 22:15:55.479161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.290 [2024-07-24 22:15:55.479178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.290 [2024-07-24 22:15:55.479188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.290 [2024-07-24 22:15:55.479196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.290 [2024-07-24 22:15:55.479213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.290 qpair failed and we were unable to recover it. 00:28:16.290 [2024-07-24 22:15:55.489075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.290 [2024-07-24 22:15:55.489154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.290 [2024-07-24 22:15:55.489172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.290 [2024-07-24 22:15:55.489182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.290 [2024-07-24 22:15:55.489191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.290 [2024-07-24 22:15:55.489207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.290 qpair failed and we were unable to recover it. 
00:28:16.290 [2024-07-24 22:15:55.499058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.290 [2024-07-24 22:15:55.499136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.290 [2024-07-24 22:15:55.499153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.290 [2024-07-24 22:15:55.499163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.290 [2024-07-24 22:15:55.499172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.290 [2024-07-24 22:15:55.499188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.290 qpair failed and we were unable to recover it. 00:28:16.551 [2024-07-24 22:15:55.509173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.509247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.509265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.509276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.509284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.509302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 00:28:16.551 [2024-07-24 22:15:55.519195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.519274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.519291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.519301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.519310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.519327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 
00:28:16.551 [2024-07-24 22:15:55.529131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.529217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.529235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.529245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.529253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.529270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 00:28:16.551 [2024-07-24 22:15:55.539241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.539397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.539414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.539424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.539432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.539449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 00:28:16.551 [2024-07-24 22:15:55.549265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.549340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.549358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.549367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.549376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.549393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 
00:28:16.551 [2024-07-24 22:15:55.559301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.559393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.559410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.559419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.559431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.559448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 00:28:16.551 [2024-07-24 22:15:55.569369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.569455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.569473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.569483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.569491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.569508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 00:28:16.551 [2024-07-24 22:15:55.579345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.579424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.579441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.579451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.579461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.579477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 
00:28:16.551 [2024-07-24 22:15:55.589394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.589484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.589501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.589511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.589520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.589536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 00:28:16.551 [2024-07-24 22:15:55.599409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.599484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.599501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.599511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.599520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.551 [2024-07-24 22:15:55.599537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.551 qpair failed and we were unable to recover it. 00:28:16.551 [2024-07-24 22:15:55.609438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.551 [2024-07-24 22:15:55.609515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.551 [2024-07-24 22:15:55.609533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.551 [2024-07-24 22:15:55.609543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.551 [2024-07-24 22:15:55.609551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.609568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 
00:28:16.552 [2024-07-24 22:15:55.619461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.619538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.619556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.619566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.619574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.619591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 00:28:16.552 [2024-07-24 22:15:55.629489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.629567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.629585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.629595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.629604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.629621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 00:28:16.552 [2024-07-24 22:15:55.639534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.639638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.639655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.639665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.639673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.639691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 
00:28:16.552 [2024-07-24 22:15:55.649537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.649610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.649627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.649640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.649649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.649666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 00:28:16.552 [2024-07-24 22:15:55.659597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.659677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.659695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.659704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.659713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.659734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 00:28:16.552 [2024-07-24 22:15:55.669586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.669677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.669695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.669705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.669719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.669736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 
00:28:16.552 [2024-07-24 22:15:55.679635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.679748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.679766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.679775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.679784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.679801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 00:28:16.552 [2024-07-24 22:15:55.689654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.689737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.689757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.689767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.689776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.689794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 00:28:16.552 [2024-07-24 22:15:55.699675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.699767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.699786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.699796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.699804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.699821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 
00:28:16.552 [2024-07-24 22:15:55.709663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.709788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.709805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.709815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.709824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.709841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 00:28:16.552 [2024-07-24 22:15:55.719750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.719833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.719851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.719861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.719869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.719886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.552 qpair failed and we were unable to recover it. 00:28:16.552 [2024-07-24 22:15:55.729781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.552 [2024-07-24 22:15:55.729858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.552 [2024-07-24 22:15:55.729876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.552 [2024-07-24 22:15:55.729886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.552 [2024-07-24 22:15:55.729895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.552 [2024-07-24 22:15:55.729911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.553 qpair failed and we were unable to recover it. 
00:28:16.553 [2024-07-24 22:15:55.739810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.553 [2024-07-24 22:15:55.739888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.553 [2024-07-24 22:15:55.739905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.553 [2024-07-24 22:15:55.739918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.553 [2024-07-24 22:15:55.739927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.553 [2024-07-24 22:15:55.739945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.553 qpair failed and we were unable to recover it. 00:28:16.553 [2024-07-24 22:15:55.749835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.553 [2024-07-24 22:15:55.749915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.553 [2024-07-24 22:15:55.749933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.553 [2024-07-24 22:15:55.749943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.553 [2024-07-24 22:15:55.749951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.553 [2024-07-24 22:15:55.749968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.553 qpair failed and we were unable to recover it. 00:28:16.553 [2024-07-24 22:15:55.759902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.553 [2024-07-24 22:15:55.759998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.553 [2024-07-24 22:15:55.760016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.553 [2024-07-24 22:15:55.760026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.553 [2024-07-24 22:15:55.760034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.553 [2024-07-24 22:15:55.760052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.553 qpair failed and we were unable to recover it. 
00:28:16.813 [2024-07-24 22:15:55.769892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.813 [2024-07-24 22:15:55.769972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.813 [2024-07-24 22:15:55.769989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.813 [2024-07-24 22:15:55.769999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.813 [2024-07-24 22:15:55.770008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.813 [2024-07-24 22:15:55.770025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-07-24 22:15:55.779932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.813 [2024-07-24 22:15:55.780034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.813 [2024-07-24 22:15:55.780051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.813 [2024-07-24 22:15:55.780061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.813 [2024-07-24 22:15:55.780070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.813 [2024-07-24 22:15:55.780086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-07-24 22:15:55.789854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.813 [2024-07-24 22:15:55.789937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.813 [2024-07-24 22:15:55.789954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.813 [2024-07-24 22:15:55.789964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.813 [2024-07-24 22:15:55.789972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.813 [2024-07-24 22:15:55.789989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.813 qpair failed and we were unable to recover it. 
00:28:16.813 [2024-07-24 22:15:55.799980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.813 [2024-07-24 22:15:55.800097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.813 [2024-07-24 22:15:55.800115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.813 [2024-07-24 22:15:55.800125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.813 [2024-07-24 22:15:55.800133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.813 [2024-07-24 22:15:55.800150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-07-24 22:15:55.809976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.813 [2024-07-24 22:15:55.810055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.813 [2024-07-24 22:15:55.810073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.813 [2024-07-24 22:15:55.810083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.810092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.810108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-07-24 22:15:55.819975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.820063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.820080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.820090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.820099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.820115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-07-24 22:15:55.830062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.830143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.830160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.830173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.830182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.830200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-07-24 22:15:55.840078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.840192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.840210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.840220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.840228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.840245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-07-24 22:15:55.850099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.850175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.850192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.850202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.850211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.850227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-07-24 22:15:55.860182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.860288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.860305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.860315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.860324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.860341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-07-24 22:15:55.870163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.870243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.870261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.870270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.870279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.870296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-07-24 22:15:55.880196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.880274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.880292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.880302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.880310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.880327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-07-24 22:15:55.890226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.890306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.890323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.890333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.890342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.890358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-07-24 22:15:55.900251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.900330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.900347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.900357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.900366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.900382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-07-24 22:15:55.910195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.910275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.910293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.910302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.910311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.910327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-07-24 22:15:55.920297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.920373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.920394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.920404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.920413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.920430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-07-24 22:15:55.930315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.930390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.930408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.930417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.930426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.814 [2024-07-24 22:15:55.930443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-07-24 22:15:55.940347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.814 [2024-07-24 22:15:55.940425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.814 [2024-07-24 22:15:55.940442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.814 [2024-07-24 22:15:55.940452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.814 [2024-07-24 22:15:55.940461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.815 [2024-07-24 22:15:55.940478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.815 qpair failed and we were unable to recover it. 
00:28:16.815 [2024-07-24 22:15:55.950405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.815 [2024-07-24 22:15:55.950518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.815 [2024-07-24 22:15:55.950536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.815 [2024-07-24 22:15:55.950545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.815 [2024-07-24 22:15:55.950554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.815 [2024-07-24 22:15:55.950570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-07-24 22:15:55.960422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.815 [2024-07-24 22:15:55.960499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.815 [2024-07-24 22:15:55.960516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.815 [2024-07-24 22:15:55.960526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.815 [2024-07-24 22:15:55.960535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.815 [2024-07-24 22:15:55.960555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-07-24 22:15:55.970460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.815 [2024-07-24 22:15:55.970532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.815 [2024-07-24 22:15:55.970550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.815 [2024-07-24 22:15:55.970560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.815 [2024-07-24 22:15:55.970569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.815 [2024-07-24 22:15:55.970586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.815 qpair failed and we were unable to recover it. 
00:28:16.815 [2024-07-24 22:15:55.980447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.815 [2024-07-24 22:15:55.980528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.815 [2024-07-24 22:15:55.980545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.815 [2024-07-24 22:15:55.980555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.815 [2024-07-24 22:15:55.980564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.815 [2024-07-24 22:15:55.980580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-07-24 22:15:55.990498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.815 [2024-07-24 22:15:55.990578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.815 [2024-07-24 22:15:55.990595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.815 [2024-07-24 22:15:55.990605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.815 [2024-07-24 22:15:55.990614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.815 [2024-07-24 22:15:55.990630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-07-24 22:15:56.000499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.815 [2024-07-24 22:15:56.000581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.815 [2024-07-24 22:15:56.000599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.815 [2024-07-24 22:15:56.000610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.815 [2024-07-24 22:15:56.000620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.815 [2024-07-24 22:15:56.000637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.815 qpair failed and we were unable to recover it. 
00:28:16.815 [2024-07-24 22:15:56.010536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.815 [2024-07-24 22:15:56.010685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.815 [2024-07-24 22:15:56.010706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.815 [2024-07-24 22:15:56.010720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.815 [2024-07-24 22:15:56.010729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.815 [2024-07-24 22:15:56.010746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-07-24 22:15:56.020509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.815 [2024-07-24 22:15:56.020586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.815 [2024-07-24 22:15:56.020604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.815 [2024-07-24 22:15:56.020615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.815 [2024-07-24 22:15:56.020624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:16.815 [2024-07-24 22:15:56.020641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.815 qpair failed and we were unable to recover it. 00:28:17.076 [2024-07-24 22:15:56.030589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.076 [2024-07-24 22:15:56.030671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.076 [2024-07-24 22:15:56.030689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.076 [2024-07-24 22:15:56.030699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.076 [2024-07-24 22:15:56.030708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.076 [2024-07-24 22:15:56.030730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.076 qpair failed and we were unable to recover it. 
00:28:17.076 [2024-07-24 22:15:56.040625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.076 [2024-07-24 22:15:56.040705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.076 [2024-07-24 22:15:56.040733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.076 [2024-07-24 22:15:56.040744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.076 [2024-07-24 22:15:56.040753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.076 [2024-07-24 22:15:56.040771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.076 qpair failed and we were unable to recover it. 00:28:17.076 [2024-07-24 22:15:56.050664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.076 [2024-07-24 22:15:56.050749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.076 [2024-07-24 22:15:56.050767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.076 [2024-07-24 22:15:56.050777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.076 [2024-07-24 22:15:56.050785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.076 [2024-07-24 22:15:56.050806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.076 qpair failed and we were unable to recover it. 00:28:17.076 [2024-07-24 22:15:56.060684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.076 [2024-07-24 22:15:56.060763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.076 [2024-07-24 22:15:56.060781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.076 [2024-07-24 22:15:56.060790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.076 [2024-07-24 22:15:56.060799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.076 [2024-07-24 22:15:56.060815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.076 qpair failed and we were unable to recover it. 
00:28:17.076 [2024-07-24 22:15:56.070703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.076 [2024-07-24 22:15:56.070795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.076 [2024-07-24 22:15:56.070813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.076 [2024-07-24 22:15:56.070823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.076 [2024-07-24 22:15:56.070831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.076 [2024-07-24 22:15:56.070848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.076 qpair failed and we were unable to recover it. 00:28:17.076 [2024-07-24 22:15:56.080802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.076 [2024-07-24 22:15:56.080880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.076 [2024-07-24 22:15:56.080898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.076 [2024-07-24 22:15:56.080908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.076 [2024-07-24 22:15:56.080917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.076 [2024-07-24 22:15:56.080934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.076 qpair failed and we were unable to recover it. 00:28:17.076 [2024-07-24 22:15:56.090782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.076 [2024-07-24 22:15:56.090874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.076 [2024-07-24 22:15:56.090892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.076 [2024-07-24 22:15:56.090902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.090910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.090927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 
00:28:17.077 [2024-07-24 22:15:56.100752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.100838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.100862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.100873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.100883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.100900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 00:28:17.077 [2024-07-24 22:15:56.110820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.110900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.110920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.110930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.110940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.110958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 00:28:17.077 [2024-07-24 22:15:56.120791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.120871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.120889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.120900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.120909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.120928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 
00:28:17.077 [2024-07-24 22:15:56.130818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.130895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.130915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.130926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.130936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.130954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 00:28:17.077 [2024-07-24 22:15:56.140931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.141014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.141033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.141043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.141053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.141073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 00:28:17.077 [2024-07-24 22:15:56.150880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.150958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.150975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.150985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.150994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.151012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 
00:28:17.077 [2024-07-24 22:15:56.160939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.161063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.161084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.161094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.161105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.161124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 00:28:17.077 [2024-07-24 22:15:56.170995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.171074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.171093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.171103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.171112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.171129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 00:28:17.077 [2024-07-24 22:15:56.180971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.181050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.181069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.181079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.181088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.181105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 
00:28:17.077 [2024-07-24 22:15:56.190985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.191064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.191086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.191096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.191104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.077 [2024-07-24 22:15:56.191122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.077 qpair failed and we were unable to recover it. 00:28:17.077 [2024-07-24 22:15:56.201059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.077 [2024-07-24 22:15:56.201135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.077 [2024-07-24 22:15:56.201153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.077 [2024-07-24 22:15:56.201163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.077 [2024-07-24 22:15:56.201172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.078 [2024-07-24 22:15:56.201189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.078 qpair failed and we were unable to recover it. 00:28:17.078 [2024-07-24 22:15:56.211121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.078 [2024-07-24 22:15:56.211234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.078 [2024-07-24 22:15:56.211253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.078 [2024-07-24 22:15:56.211263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.078 [2024-07-24 22:15:56.211273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.078 [2024-07-24 22:15:56.211292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.078 qpair failed and we were unable to recover it. 
00:28:17.078 [2024-07-24 22:15:56.221167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.078 [2024-07-24 22:15:56.221247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.078 [2024-07-24 22:15:56.221266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.078 [2024-07-24 22:15:56.221277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.078 [2024-07-24 22:15:56.221285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.078 [2024-07-24 22:15:56.221303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.078 qpair failed and we were unable to recover it. 00:28:17.078 [2024-07-24 22:15:56.231099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.078 [2024-07-24 22:15:56.231180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.078 [2024-07-24 22:15:56.231198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.078 [2024-07-24 22:15:56.231208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.078 [2024-07-24 22:15:56.231219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.078 [2024-07-24 22:15:56.231236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.078 qpair failed and we were unable to recover it. 00:28:17.078 [2024-07-24 22:15:56.241217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.078 [2024-07-24 22:15:56.241302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.078 [2024-07-24 22:15:56.241320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.078 [2024-07-24 22:15:56.241330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.078 [2024-07-24 22:15:56.241339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.078 [2024-07-24 22:15:56.241357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.078 qpair failed and we were unable to recover it. 
00:28:17.078 [2024-07-24 22:15:56.251196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.078 [2024-07-24 22:15:56.251276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.078 [2024-07-24 22:15:56.251293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.078 [2024-07-24 22:15:56.251303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.078 [2024-07-24 22:15:56.251312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.078 [2024-07-24 22:15:56.251329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.078 qpair failed and we were unable to recover it. 00:28:17.078 [2024-07-24 22:15:56.261252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.078 [2024-07-24 22:15:56.261330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.078 [2024-07-24 22:15:56.261348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.078 [2024-07-24 22:15:56.261358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.078 [2024-07-24 22:15:56.261367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.078 [2024-07-24 22:15:56.261383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.078 qpair failed and we were unable to recover it. 00:28:17.078 [2024-07-24 22:15:56.271297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.078 [2024-07-24 22:15:56.271383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.078 [2024-07-24 22:15:56.271401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.078 [2024-07-24 22:15:56.271411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.078 [2024-07-24 22:15:56.271420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.078 [2024-07-24 22:15:56.271437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.078 qpair failed and we were unable to recover it. 
00:28:17.078 [2024-07-24 22:15:56.281319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.078 [2024-07-24 22:15:56.281401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.078 [2024-07-24 22:15:56.281419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.078 [2024-07-24 22:15:56.281430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.078 [2024-07-24 22:15:56.281438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.078 [2024-07-24 22:15:56.281455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.078 qpair failed and we were unable to recover it. 00:28:17.339 [2024-07-24 22:15:56.291274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.291354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.291373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.291384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.291393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.291410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 00:28:17.339 [2024-07-24 22:15:56.301304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.301381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.301398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.301409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.301418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.301435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 
00:28:17.339 [2024-07-24 22:15:56.311371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.311450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.311468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.311478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.311487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.311504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 00:28:17.339 [2024-07-24 22:15:56.321427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.321508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.321526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.321537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.321549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.321567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 00:28:17.339 [2024-07-24 22:15:56.331436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.331516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.331534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.331544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.331553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.331570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 
00:28:17.339 [2024-07-24 22:15:56.341408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.341488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.341508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.341519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.341528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.341546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 00:28:17.339 [2024-07-24 22:15:56.351501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.351660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.351679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.351689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.351698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.351723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 00:28:17.339 [2024-07-24 22:15:56.361566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.361642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.361661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.361671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.361680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.361697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 
00:28:17.339 [2024-07-24 22:15:56.371550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.371629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.371647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.371657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.371666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.371683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 00:28:17.339 [2024-07-24 22:15:56.381553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.381632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.381650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.381660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.381669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.381686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 00:28:17.339 [2024-07-24 22:15:56.391636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.391728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.391749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.391759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.391768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.391786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 
00:28:17.339 [2024-07-24 22:15:56.401648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.401731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.339 [2024-07-24 22:15:56.401749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.339 [2024-07-24 22:15:56.401759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.339 [2024-07-24 22:15:56.401768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.339 [2024-07-24 22:15:56.401784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.339 qpair failed and we were unable to recover it. 00:28:17.339 [2024-07-24 22:15:56.411665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.339 [2024-07-24 22:15:56.411752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.411769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.411782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.411791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.411809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 00:28:17.340 [2024-07-24 22:15:56.421774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.421889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.421908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.421918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.421927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.421944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 
00:28:17.340 [2024-07-24 22:15:56.431718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.431796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.431814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.431824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.431833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.431850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 00:28:17.340 [2024-07-24 22:15:56.441749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.441827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.441846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.441857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.441866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.441884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 00:28:17.340 [2024-07-24 22:15:56.451704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.451789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.451807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.451817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.451826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.451842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 
00:28:17.340 [2024-07-24 22:15:56.461827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.461908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.461926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.461936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.461944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.461962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 00:28:17.340 [2024-07-24 22:15:56.471842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.471924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.471942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.471953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.471962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.471979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 00:28:17.340 [2024-07-24 22:15:56.481876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.481953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.481971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.481981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.481990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.482007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 
00:28:17.340 [2024-07-24 22:15:56.491881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.491959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.491977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.491988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.491997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.492014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 00:28:17.340 [2024-07-24 22:15:56.501895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.501974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.501993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.502006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.502015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.502033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 00:28:17.340 [2024-07-24 22:15:56.511923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.512001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.512019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.512029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.512038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.512055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 
00:28:17.340 [2024-07-24 22:15:56.521980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.522061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.522080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.522090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.522099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.522117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 00:28:17.340 [2024-07-24 22:15:56.531954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.532027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.532045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.532054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.340 [2024-07-24 22:15:56.532063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.340 [2024-07-24 22:15:56.532080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.340 qpair failed and we were unable to recover it. 00:28:17.340 [2024-07-24 22:15:56.541989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.340 [2024-07-24 22:15:56.542080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.340 [2024-07-24 22:15:56.542097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.340 [2024-07-24 22:15:56.542107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.341 [2024-07-24 22:15:56.542116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.341 [2024-07-24 22:15:56.542134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.341 qpair failed and we were unable to recover it. 
00:28:17.601 [2024-07-24 22:15:56.552128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.601 [2024-07-24 22:15:56.552306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.601 [2024-07-24 22:15:56.552325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.601 [2024-07-24 22:15:56.552335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.601 [2024-07-24 22:15:56.552344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.601 [2024-07-24 22:15:56.552362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.601 qpair failed and we were unable to recover it. 00:28:17.601 [2024-07-24 22:15:56.562122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.601 [2024-07-24 22:15:56.562204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.601 [2024-07-24 22:15:56.562222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.601 [2024-07-24 22:15:56.562232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.601 [2024-07-24 22:15:56.562241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.601 [2024-07-24 22:15:56.562258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.601 qpair failed and we were unable to recover it. 00:28:17.601 [2024-07-24 22:15:56.572167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.601 [2024-07-24 22:15:56.572243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.601 [2024-07-24 22:15:56.572261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.601 [2024-07-24 22:15:56.572271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.601 [2024-07-24 22:15:56.572280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.601 [2024-07-24 22:15:56.572297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.601 qpair failed and we were unable to recover it. 
00:28:17.601 [2024-07-24 22:15:56.582223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.601 [2024-07-24 22:15:56.582349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.601 [2024-07-24 22:15:56.582368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.601 [2024-07-24 22:15:56.582378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.582387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.582404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.602 [2024-07-24 22:15:56.592115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.592196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.592214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.592228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.592236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.592253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.602 [2024-07-24 22:15:56.602220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.602301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.602319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.602329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.602338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.602354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 
00:28:17.602 [2024-07-24 22:15:56.612253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.612332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.612350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.612360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.612369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.612386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.602 [2024-07-24 22:15:56.622255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.622417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.622436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.622445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.622454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.622472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.602 [2024-07-24 22:15:56.632275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.632355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.632374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.632384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.632394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.632411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 
00:28:17.602 [2024-07-24 22:15:56.642329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.642444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.642462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.642472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.642481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.642498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.602 [2024-07-24 22:15:56.652293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.652389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.652408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.652419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.652428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.652446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.602 [2024-07-24 22:15:56.662390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.662466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.662484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.662494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.662503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.662520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 
00:28:17.602 [2024-07-24 22:15:56.672434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.672514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.672534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.672544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.672554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.672571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.602 [2024-07-24 22:15:56.682464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.682547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.682568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.682578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.682587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.682605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.602 [2024-07-24 22:15:56.692460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.692541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.692558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.692568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.692577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.692594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 
00:28:17.602 [2024-07-24 22:15:56.702483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.702571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.702588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.702598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.702607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.702623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.602 [2024-07-24 22:15:56.712613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.602 [2024-07-24 22:15:56.712690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.602 [2024-07-24 22:15:56.712708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.602 [2024-07-24 22:15:56.712723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.602 [2024-07-24 22:15:56.712732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.602 [2024-07-24 22:15:56.712751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.602 qpair failed and we were unable to recover it. 00:28:17.603 [2024-07-24 22:15:56.722519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.722598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.722617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.722627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.722636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.722652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 
00:28:17.603 [2024-07-24 22:15:56.732580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.732661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.732678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.732689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.732697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.732719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 00:28:17.603 [2024-07-24 22:15:56.742565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.742652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.742669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.742679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.742688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.742706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 00:28:17.603 [2024-07-24 22:15:56.752633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.752712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.752733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.752743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.752752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.752769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 
00:28:17.603 [2024-07-24 22:15:56.762662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.762746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.762764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.762774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.762783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.762800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 00:28:17.603 [2024-07-24 22:15:56.772687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.772771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.772791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.772801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.772810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.772828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 00:28:17.603 [2024-07-24 22:15:56.782731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.782810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.782828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.782838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.782847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.782863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 
00:28:17.603 [2024-07-24 22:15:56.792762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.792876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.792894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.792905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.792914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.792931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 00:28:17.603 [2024-07-24 22:15:56.802781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.802867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.802884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.802894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.802902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.802919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 00:28:17.603 [2024-07-24 22:15:56.812819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.603 [2024-07-24 22:15:56.812895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.603 [2024-07-24 22:15:56.812914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.603 [2024-07-24 22:15:56.812924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.603 [2024-07-24 22:15:56.812933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.603 [2024-07-24 22:15:56.812953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.603 qpair failed and we were unable to recover it. 
00:28:17.864 [2024-07-24 22:15:56.822843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.864 [2024-07-24 22:15:56.822924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.864 [2024-07-24 22:15:56.822941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.864 [2024-07-24 22:15:56.822951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.864 [2024-07-24 22:15:56.822960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.864 [2024-07-24 22:15:56.822977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.864 qpair failed and we were unable to recover it. 00:28:17.864 [2024-07-24 22:15:56.832857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.864 [2024-07-24 22:15:56.832942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.864 [2024-07-24 22:15:56.832959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.864 [2024-07-24 22:15:56.832969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.864 [2024-07-24 22:15:56.832978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.864 [2024-07-24 22:15:56.832994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.864 qpair failed and we were unable to recover it. 00:28:17.864 [2024-07-24 22:15:56.842888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.864 [2024-07-24 22:15:56.842960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.864 [2024-07-24 22:15:56.842977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.864 [2024-07-24 22:15:56.842987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.864 [2024-07-24 22:15:56.842995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.864 [2024-07-24 22:15:56.843012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.864 qpair failed and we were unable to recover it. 
00:28:17.864 [2024-07-24 22:15:56.852900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.864 [2024-07-24 22:15:56.852977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.864 [2024-07-24 22:15:56.852994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.864 [2024-07-24 22:15:56.853004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.864 [2024-07-24 22:15:56.853013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.864 [2024-07-24 22:15:56.853029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.864 qpair failed and we were unable to recover it. 00:28:17.864 [2024-07-24 22:15:56.862936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.864 [2024-07-24 22:15:56.863015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.864 [2024-07-24 22:15:56.863035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.864 [2024-07-24 22:15:56.863045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.864 [2024-07-24 22:15:56.863054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.864 [2024-07-24 22:15:56.863070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.864 qpair failed and we were unable to recover it. 00:28:17.865 [2024-07-24 22:15:56.872988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.873068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.873085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.873095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.873104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.873121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 
00:28:17.865 [2024-07-24 22:15:56.883007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.883085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.883102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.883112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.883121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.883138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 00:28:17.865 [2024-07-24 22:15:56.893125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.893214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.893232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.893242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.893250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.893268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 00:28:17.865 [2024-07-24 22:15:56.903058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.903182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.903201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.903211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.903220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.903240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 
00:28:17.865 [2024-07-24 22:15:56.913097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.913225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.913243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.913253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.913262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.913279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 00:28:17.865 [2024-07-24 22:15:56.923113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.923274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.923293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.923302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.923311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.923329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 00:28:17.865 [2024-07-24 22:15:56.933149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.933222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.933239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.933249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.933258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.933274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 
00:28:17.865 [2024-07-24 22:15:56.943168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.943330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.943349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.943358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.943367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.943384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 00:28:17.865 [2024-07-24 22:15:56.953189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.953268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.953289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.953299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.953307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.953325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 00:28:17.865 [2024-07-24 22:15:56.963205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.963282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.963299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.963309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.963318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.963334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 
00:28:17.865 [2024-07-24 22:15:56.973251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.973327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.973344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.973354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.973362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.973379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 00:28:17.865 [2024-07-24 22:15:56.983256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.983411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.983429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.983439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.983448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.865 [2024-07-24 22:15:56.983466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.865 qpair failed and we were unable to recover it. 00:28:17.865 [2024-07-24 22:15:56.993312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.865 [2024-07-24 22:15:56.993391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.865 [2024-07-24 22:15:56.993409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.865 [2024-07-24 22:15:56.993419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.865 [2024-07-24 22:15:56.993431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.866 [2024-07-24 22:15:56.993448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 
00:28:17.866 [2024-07-24 22:15:57.003337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-07-24 22:15:57.003416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-07-24 22:15:57.003435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-07-24 22:15:57.003445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-07-24 22:15:57.003454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.866 [2024-07-24 22:15:57.003470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 00:28:17.866 [2024-07-24 22:15:57.013362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-07-24 22:15:57.013436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-07-24 22:15:57.013453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-07-24 22:15:57.013463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-07-24 22:15:57.013472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.866 [2024-07-24 22:15:57.013489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 00:28:17.866 [2024-07-24 22:15:57.023373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-07-24 22:15:57.023452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-07-24 22:15:57.023470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-07-24 22:15:57.023480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-07-24 22:15:57.023489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.866 [2024-07-24 22:15:57.023506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 
00:28:17.866 [2024-07-24 22:15:57.033447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-07-24 22:15:57.033570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-07-24 22:15:57.033589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-07-24 22:15:57.033599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-07-24 22:15:57.033608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.866 [2024-07-24 22:15:57.033626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 00:28:17.866 [2024-07-24 22:15:57.043437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-07-24 22:15:57.043514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-07-24 22:15:57.043532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-07-24 22:15:57.043542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-07-24 22:15:57.043550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.866 [2024-07-24 22:15:57.043567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 00:28:17.866 [2024-07-24 22:15:57.053510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-07-24 22:15:57.053670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-07-24 22:15:57.053688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-07-24 22:15:57.053699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-07-24 22:15:57.053708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.866 [2024-07-24 22:15:57.053732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 
00:28:17.866 [2024-07-24 22:15:57.063506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-07-24 22:15:57.063583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-07-24 22:15:57.063601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-07-24 22:15:57.063611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-07-24 22:15:57.063621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.866 [2024-07-24 22:15:57.063638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 00:28:17.866 [2024-07-24 22:15:57.073524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-07-24 22:15:57.073693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-07-24 22:15:57.073712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-07-24 22:15:57.073728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-07-24 22:15:57.073737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:17.866 [2024-07-24 22:15:57.073754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 00:28:18.127 [2024-07-24 22:15:57.083560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.127 [2024-07-24 22:15:57.083679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.127 [2024-07-24 22:15:57.083698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.127 [2024-07-24 22:15:57.083708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.127 [2024-07-24 22:15:57.083726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.127 [2024-07-24 22:15:57.083744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.127 qpair failed and we were unable to recover it. 
00:28:18.127 [2024-07-24 22:15:57.093594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.127 [2024-07-24 22:15:57.093673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.127 [2024-07-24 22:15:57.093690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.127 [2024-07-24 22:15:57.093700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.127 [2024-07-24 22:15:57.093709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.127 [2024-07-24 22:15:57.093730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.127 qpair failed and we were unable to recover it. 00:28:18.127 [2024-07-24 22:15:57.103623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.127 [2024-07-24 22:15:57.103699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.127 [2024-07-24 22:15:57.103720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.127 [2024-07-24 22:15:57.103731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.127 [2024-07-24 22:15:57.103739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.127 [2024-07-24 22:15:57.103756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.127 qpair failed and we were unable to recover it. 00:28:18.127 [2024-07-24 22:15:57.113650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.127 [2024-07-24 22:15:57.113729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.127 [2024-07-24 22:15:57.113747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.127 [2024-07-24 22:15:57.113757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.127 [2024-07-24 22:15:57.113765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.127 [2024-07-24 22:15:57.113783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.127 qpair failed and we were unable to recover it. 
00:28:18.127 [2024-07-24 22:15:57.123649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.127 [2024-07-24 22:15:57.123731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.127 [2024-07-24 22:15:57.123749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.127 [2024-07-24 22:15:57.123759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.127 [2024-07-24 22:15:57.123768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.127 [2024-07-24 22:15:57.123785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.127 qpair failed and we were unable to recover it. 00:28:18.127 [2024-07-24 22:15:57.133624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.127 [2024-07-24 22:15:57.133701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.127 [2024-07-24 22:15:57.133725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.133735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.133743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.133760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 00:28:18.128 [2024-07-24 22:15:57.143749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.143829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.143846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.143856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.143865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.143882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-07-24 22:15:57.153751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.153828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.153845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.153855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.153863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.153880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 00:28:18.128 [2024-07-24 22:15:57.163782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.163863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.163880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.163890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.163898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.163914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 00:28:18.128 [2024-07-24 22:15:57.173800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.173891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.173909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.173919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.173932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.173951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-07-24 22:15:57.183799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.183880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.183898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.183908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.183917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.183934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 00:28:18.128 [2024-07-24 22:15:57.193861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.193954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.193972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.193982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.193991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.194008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 00:28:18.128 [2024-07-24 22:15:57.203988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.204069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.204086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.204097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.204106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.204123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-07-24 22:15:57.213912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.213989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.214007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.214017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.214026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.214042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 00:28:18.128 [2024-07-24 22:15:57.224035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.224157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.224177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.224188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.224197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.224214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 00:28:18.128 [2024-07-24 22:15:57.233969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.234049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.234066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.234076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.234085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.234103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-07-24 22:15:57.243954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.244048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.244065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.244075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.244084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.244100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 00:28:18.128 [2024-07-24 22:15:57.253986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.254086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.254103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.254113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.254121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.128 [2024-07-24 22:15:57.254138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 00:28:18.128 [2024-07-24 22:15:57.264057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-07-24 22:15:57.264175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-07-24 22:15:57.264192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-07-24 22:15:57.264205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-07-24 22:15:57.264214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.129 [2024-07-24 22:15:57.264231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.129 qpair failed and we were unable to recover it. 
00:28:18.129 [2024-07-24 22:15:57.274088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.129 [2024-07-24 22:15:57.274213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.129 [2024-07-24 22:15:57.274231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.129 [2024-07-24 22:15:57.274241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.129 [2024-07-24 22:15:57.274249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.129 [2024-07-24 22:15:57.274266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.129 qpair failed and we were unable to recover it. 00:28:18.129 [2024-07-24 22:15:57.284171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.129 [2024-07-24 22:15:57.284281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.129 [2024-07-24 22:15:57.284298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.129 [2024-07-24 22:15:57.284307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.129 [2024-07-24 22:15:57.284316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.129 [2024-07-24 22:15:57.284332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.129 qpair failed and we were unable to recover it. 00:28:18.129 [2024-07-24 22:15:57.294145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.129 [2024-07-24 22:15:57.294224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.129 [2024-07-24 22:15:57.294242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.129 [2024-07-24 22:15:57.294252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.129 [2024-07-24 22:15:57.294260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.129 [2024-07-24 22:15:57.294277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.129 qpair failed and we were unable to recover it. 
00:28:18.129 [2024-07-24 22:15:57.304218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.129 [2024-07-24 22:15:57.304298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.129 [2024-07-24 22:15:57.304316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.129 [2024-07-24 22:15:57.304326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.129 [2024-07-24 22:15:57.304334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.129 [2024-07-24 22:15:57.304351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.129 qpair failed and we were unable to recover it. 00:28:18.129 [2024-07-24 22:15:57.314151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.129 [2024-07-24 22:15:57.314236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.129 [2024-07-24 22:15:57.314253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.129 [2024-07-24 22:15:57.314263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.129 [2024-07-24 22:15:57.314272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.129 [2024-07-24 22:15:57.314297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.129 qpair failed and we were unable to recover it. 00:28:18.129 [2024-07-24 22:15:57.324257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.129 [2024-07-24 22:15:57.324336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.129 [2024-07-24 22:15:57.324354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.129 [2024-07-24 22:15:57.324364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.129 [2024-07-24 22:15:57.324372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.129 [2024-07-24 22:15:57.324389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.129 qpair failed and we were unable to recover it. 
00:28:18.129 [2024-07-24 22:15:57.334294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.129 [2024-07-24 22:15:57.334376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.129 [2024-07-24 22:15:57.334393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.129 [2024-07-24 22:15:57.334403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.129 [2024-07-24 22:15:57.334412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.129 [2024-07-24 22:15:57.334429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.129 qpair failed and we were unable to recover it. 00:28:18.390 [2024-07-24 22:15:57.344326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.390 [2024-07-24 22:15:57.344406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.390 [2024-07-24 22:15:57.344424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.390 [2024-07-24 22:15:57.344434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.390 [2024-07-24 22:15:57.344443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.390 [2024-07-24 22:15:57.344461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.390 qpair failed and we were unable to recover it. 00:28:18.390 [2024-07-24 22:15:57.354346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.390 [2024-07-24 22:15:57.354425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.390 [2024-07-24 22:15:57.354443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.390 [2024-07-24 22:15:57.354455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.390 [2024-07-24 22:15:57.354464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.390 [2024-07-24 22:15:57.354481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.390 qpair failed and we were unable to recover it. 
00:28:18.390 [2024-07-24 22:15:57.364360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.390 [2024-07-24 22:15:57.364443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.390 [2024-07-24 22:15:57.364460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.390 [2024-07-24 22:15:57.364470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.390 [2024-07-24 22:15:57.364479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.390 [2024-07-24 22:15:57.364496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.390 qpair failed and we were unable to recover it. 00:28:18.390 [2024-07-24 22:15:57.374405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.390 [2024-07-24 22:15:57.374564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.390 [2024-07-24 22:15:57.374582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.390 [2024-07-24 22:15:57.374591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.390 [2024-07-24 22:15:57.374600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.390 [2024-07-24 22:15:57.374617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.390 qpair failed and we were unable to recover it. 00:28:18.390 [2024-07-24 22:15:57.384437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.390 [2024-07-24 22:15:57.384514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.390 [2024-07-24 22:15:57.384532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.390 [2024-07-24 22:15:57.384542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.390 [2024-07-24 22:15:57.384551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.390 [2024-07-24 22:15:57.384567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.390 qpair failed and we were unable to recover it. 
00:28:18.390 [2024-07-24 22:15:57.394480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.390 [2024-07-24 22:15:57.394596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.390 [2024-07-24 22:15:57.394614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.390 [2024-07-24 22:15:57.394624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.390 [2024-07-24 22:15:57.394633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.390 [2024-07-24 22:15:57.394649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.390 qpair failed and we were unable to recover it. 00:28:18.390 [2024-07-24 22:15:57.404497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.390 [2024-07-24 22:15:57.404576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.390 [2024-07-24 22:15:57.404594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.390 [2024-07-24 22:15:57.404604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.390 [2024-07-24 22:15:57.404613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.404630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 00:28:18.391 [2024-07-24 22:15:57.414515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.414593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.414611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.414621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.414630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.414647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-07-24 22:15:57.424540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.424619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.424638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.424649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.424658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.424675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 00:28:18.391 [2024-07-24 22:15:57.434546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.434622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.434640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.434650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.434659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.434676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 00:28:18.391 [2024-07-24 22:15:57.444595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.444674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.444695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.444705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.444718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.444736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-07-24 22:15:57.454587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.454673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.454692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.454702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.454711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.454733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 00:28:18.391 [2024-07-24 22:15:57.464694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.464797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.464815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.464825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.464833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.464851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 00:28:18.391 [2024-07-24 22:15:57.474670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.474756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.474774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.474784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.474793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.474809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-07-24 22:15:57.484695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.484819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.484836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.484847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.484855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.484872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 00:28:18.391 [2024-07-24 22:15:57.494741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.494824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.494842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.494852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.494860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.494877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 00:28:18.391 [2024-07-24 22:15:57.504771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.504853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.504871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.504881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.504890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.504907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-07-24 22:15:57.514779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.514859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.514877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.514888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.514897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.514915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 00:28:18.391 [2024-07-24 22:15:57.524873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.524982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.525000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.525009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.525018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.525035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 00:28:18.391 [2024-07-24 22:15:57.534848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-07-24 22:15:57.534947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-07-24 22:15:57.534969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-07-24 22:15:57.534979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-07-24 22:15:57.534988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.391 [2024-07-24 22:15:57.535005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-07-24 22:15:57.544869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.392 [2024-07-24 22:15:57.544957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.392 [2024-07-24 22:15:57.544974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.392 [2024-07-24 22:15:57.544984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.392 [2024-07-24 22:15:57.544993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.392 [2024-07-24 22:15:57.545010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.392 qpair failed and we were unable to recover it. 00:28:18.392 [2024-07-24 22:15:57.554912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.392 [2024-07-24 22:15:57.554990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.392 [2024-07-24 22:15:57.555008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.392 [2024-07-24 22:15:57.555018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.392 [2024-07-24 22:15:57.555026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.392 [2024-07-24 22:15:57.555043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.392 qpair failed and we were unable to recover it. 00:28:18.392 [2024-07-24 22:15:57.564945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.392 [2024-07-24 22:15:57.565020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.392 [2024-07-24 22:15:57.565038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.392 [2024-07-24 22:15:57.565048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.392 [2024-07-24 22:15:57.565056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.392 [2024-07-24 22:15:57.565073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.392 qpair failed and we were unable to recover it. 
00:28:18.392 [2024-07-24 22:15:57.574952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.392 [2024-07-24 22:15:57.575023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.392 [2024-07-24 22:15:57.575041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.392 [2024-07-24 22:15:57.575051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.392 [2024-07-24 22:15:57.575059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.392 [2024-07-24 22:15:57.575079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.392 qpair failed and we were unable to recover it. 00:28:18.392 [2024-07-24 22:15:57.584992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.392 [2024-07-24 22:15:57.585071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.392 [2024-07-24 22:15:57.585088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.392 [2024-07-24 22:15:57.585099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.392 [2024-07-24 22:15:57.585107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.392 [2024-07-24 22:15:57.585124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.392 qpair failed and we were unable to recover it. 00:28:18.392 [2024-07-24 22:15:57.594950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.392 [2024-07-24 22:15:57.595040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.392 [2024-07-24 22:15:57.595057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.392 [2024-07-24 22:15:57.595067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.392 [2024-07-24 22:15:57.595075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.392 [2024-07-24 22:15:57.595092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.392 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-07-24 22:15:57.605069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-07-24 22:15:57.605151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-07-24 22:15:57.605168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-07-24 22:15:57.605178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-07-24 22:15:57.605186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.653 [2024-07-24 22:15:57.605203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-07-24 22:15:57.615005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-07-24 22:15:57.615081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-07-24 22:15:57.615099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-07-24 22:15:57.615108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-07-24 22:15:57.615117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.653 [2024-07-24 22:15:57.615134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-07-24 22:15:57.625109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-07-24 22:15:57.625189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-07-24 22:15:57.625210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-07-24 22:15:57.625219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-07-24 22:15:57.625228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.653 [2024-07-24 22:15:57.625244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-07-24 22:15:57.635131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-07-24 22:15:57.635212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-07-24 22:15:57.635230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-07-24 22:15:57.635239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-07-24 22:15:57.635248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.653 [2024-07-24 22:15:57.635264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-07-24 22:15:57.645170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-07-24 22:15:57.645244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-07-24 22:15:57.645261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-07-24 22:15:57.645271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-07-24 22:15:57.645280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.653 [2024-07-24 22:15:57.645296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-07-24 22:15:57.655175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-07-24 22:15:57.655268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-07-24 22:15:57.655288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-07-24 22:15:57.655298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-07-24 22:15:57.655306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.653 [2024-07-24 22:15:57.655325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-07-24 22:15:57.665214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-07-24 22:15:57.665291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-07-24 22:15:57.665309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-07-24 22:15:57.665318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-07-24 22:15:57.665327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.653 [2024-07-24 22:15:57.665348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-07-24 22:15:57.675234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-07-24 22:15:57.675317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-07-24 22:15:57.675334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-07-24 22:15:57.675344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-07-24 22:15:57.675353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.653 [2024-07-24 22:15:57.675369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-07-24 22:15:57.685280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-07-24 22:15:57.685365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-07-24 22:15:57.685382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.685392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.685400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.685417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 
00:28:18.654 [2024-07-24 22:15:57.695330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.695412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.695429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.695439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.695447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.695464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 00:28:18.654 [2024-07-24 22:15:57.705257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.705337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.705355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.705364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.705373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.705390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 00:28:18.654 [2024-07-24 22:15:57.715369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.715448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.715471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.715481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.715490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.715507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 
00:28:18.654 [2024-07-24 22:15:57.725318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.725393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.725412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.725423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.725432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.725449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 00:28:18.654 [2024-07-24 22:15:57.735399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.735486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.735504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.735514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.735523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.735539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 00:28:18.654 [2024-07-24 22:15:57.745466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.745546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.745563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.745574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.745583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.745600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 
00:28:18.654 [2024-07-24 22:15:57.755408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.755491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.755511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.755522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.755534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.755551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 00:28:18.654 [2024-07-24 22:15:57.765429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.765505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.765524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.765535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.765543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.765560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 00:28:18.654 [2024-07-24 22:15:57.775533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.775613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.775631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.775641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.775650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.775667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 
00:28:18.654 [2024-07-24 22:15:57.785580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.785659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.785677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.785687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.785696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.785712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 00:28:18.654 [2024-07-24 22:15:57.795586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.795701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.795726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.795736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.795745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.795762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 00:28:18.654 [2024-07-24 22:15:57.805640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.805835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.805855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.805866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.805875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.654 [2024-07-24 22:15:57.805892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.654 qpair failed and we were unable to recover it. 
00:28:18.654 [2024-07-24 22:15:57.815646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.654 [2024-07-24 22:15:57.815728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.654 [2024-07-24 22:15:57.815746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.654 [2024-07-24 22:15:57.815755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.654 [2024-07-24 22:15:57.815764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.655 [2024-07-24 22:15:57.815781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-07-24 22:15:57.825660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.655 [2024-07-24 22:15:57.825820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.655 [2024-07-24 22:15:57.825838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.655 [2024-07-24 22:15:57.825847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.655 [2024-07-24 22:15:57.825856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.655 [2024-07-24 22:15:57.825873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-07-24 22:15:57.835696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.655 [2024-07-24 22:15:57.835785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.655 [2024-07-24 22:15:57.835803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.655 [2024-07-24 22:15:57.835813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.655 [2024-07-24 22:15:57.835821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.655 [2024-07-24 22:15:57.835838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-07-24 22:15:57.845646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.655 [2024-07-24 22:15:57.845728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.655 [2024-07-24 22:15:57.845746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.655 [2024-07-24 22:15:57.845756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.655 [2024-07-24 22:15:57.845768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.655 [2024-07-24 22:15:57.845786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-07-24 22:15:57.855756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.655 [2024-07-24 22:15:57.855883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.655 [2024-07-24 22:15:57.855901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.655 [2024-07-24 22:15:57.855911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.655 [2024-07-24 22:15:57.855920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.655 [2024-07-24 22:15:57.855937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.915 [2024-07-24 22:15:57.865774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.915 [2024-07-24 22:15:57.865876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.915 [2024-07-24 22:15:57.865894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.915 [2024-07-24 22:15:57.865904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.915 [2024-07-24 22:15:57.865913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.915 [2024-07-24 22:15:57.865930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.915 qpair failed and we were unable to recover it. 
00:28:18.915 [2024-07-24 22:15:57.875831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.915 [2024-07-24 22:15:57.875914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.915 [2024-07-24 22:15:57.875932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.915 [2024-07-24 22:15:57.875942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.915 [2024-07-24 22:15:57.875950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.915 [2024-07-24 22:15:57.875968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:57.885789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.885865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.885883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.885893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.885901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.885918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:57.895836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.896039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.896059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.896069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.896078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.896095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 
00:28:18.916 [2024-07-24 22:15:57.905887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.905965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.905983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.905993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.906002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.906018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:57.915936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.916021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.916039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.916049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.916058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.916075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:57.926009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.926122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.926140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.926150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.926159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.926176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 
00:28:18.916 [2024-07-24 22:15:57.935979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.936055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.936073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.936083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.936095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.936111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:57.946031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.946111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.946130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.946141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.946149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.946166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:57.956040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.956201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.956219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.956229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.956238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.956255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 
00:28:18.916 [2024-07-24 22:15:57.966000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.966077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.966095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.966105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.966114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.966131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:57.976078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.976154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.976172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.976182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.976191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.976208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:57.986111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.986193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.986212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.986222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.986230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.986247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 
00:28:18.916 [2024-07-24 22:15:57.996083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:57.996169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:57.996187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:57.996197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:57.996206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:57.996223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:58.006112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.916 [2024-07-24 22:15:58.006192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.916 [2024-07-24 22:15:58.006209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.916 [2024-07-24 22:15:58.006219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.916 [2024-07-24 22:15:58.006228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.916 [2024-07-24 22:15:58.006245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.916 qpair failed and we were unable to recover it. 00:28:18.916 [2024-07-24 22:15:58.016190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.016270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.016288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.016299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.016307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.016324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 
00:28:18.917 [2024-07-24 22:15:58.026171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.026248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.026266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.026279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.026288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.026304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 00:28:18.917 [2024-07-24 22:15:58.036253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.036340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.036358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.036368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.036376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.036393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 00:28:18.917 [2024-07-24 22:15:58.046291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.046367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.046384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.046395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.046403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.046420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 
00:28:18.917 [2024-07-24 22:15:58.056317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.056403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.056421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.056431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.056440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.056456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 00:28:18.917 [2024-07-24 22:15:58.066286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.066366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.066384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.066395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.066403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.066420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 00:28:18.917 [2024-07-24 22:15:58.076313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.076396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.076415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.076425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.076433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.076452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 
00:28:18.917 [2024-07-24 22:15:58.086411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.086491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.086509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.086519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.086528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.086544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 00:28:18.917 [2024-07-24 22:15:58.096436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.096550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.096568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.096578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.096587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.096604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 00:28:18.917 [2024-07-24 22:15:58.106468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.106547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.106565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.106576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.106584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.106601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 
00:28:18.917 [2024-07-24 22:15:58.116509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.116589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.116609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.116622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.116630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.116648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 00:28:18.917 [2024-07-24 22:15:58.126505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.917 [2024-07-24 22:15:58.126634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.917 [2024-07-24 22:15:58.126653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.917 [2024-07-24 22:15:58.126663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.917 [2024-07-24 22:15:58.126672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:18.917 [2024-07-24 22:15:58.126690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.917 qpair failed and we were unable to recover it. 00:28:19.179 [2024-07-24 22:15:58.136480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.136559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.136577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.136587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.136596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.136613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 
00:28:19.179 [2024-07-24 22:15:58.146524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.146613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.146631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.146641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.146649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.146667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 00:28:19.179 [2024-07-24 22:15:58.156543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.156653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.156671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.156681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.156690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.156707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 00:28:19.179 [2024-07-24 22:15:58.166604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.166696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.166719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.166729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.166738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.166755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 
00:28:19.179 [2024-07-24 22:15:58.176589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.176676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.176694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.176704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.176713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.176736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 00:28:19.179 [2024-07-24 22:15:58.186741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.186847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.186865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.186876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.186885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.186901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 00:28:19.179 [2024-07-24 22:15:58.196737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.196845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.196862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.196872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.196881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.196898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 
00:28:19.179 [2024-07-24 22:15:58.206747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.206822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.206840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.206852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.206861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.206878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 00:28:19.179 [2024-07-24 22:15:58.216768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.216893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.216913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.216923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.216932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.216950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 00:28:19.179 [2024-07-24 22:15:58.226822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.226901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.226919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.226930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.226938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.226955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 
00:28:19.179 [2024-07-24 22:15:58.236828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.179 [2024-07-24 22:15:58.236909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.179 [2024-07-24 22:15:58.236926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.179 [2024-07-24 22:15:58.236936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.179 [2024-07-24 22:15:58.236945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.179 [2024-07-24 22:15:58.236962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.179 qpair failed and we were unable to recover it. 00:28:19.179 [2024-07-24 22:15:58.246805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.246887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.246905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.246915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.246924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.246941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 00:28:19.180 [2024-07-24 22:15:58.256830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.256909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.256927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.256937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.256945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.256963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 
00:28:19.180 [2024-07-24 22:15:58.266876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.266955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.266973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.266983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.266992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.267009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 00:28:19.180 [2024-07-24 22:15:58.276881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.276960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.276978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.276988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.276997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.277015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 00:28:19.180 [2024-07-24 22:15:58.286917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.286996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.287014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.287024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.287034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.287050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 
00:28:19.180 [2024-07-24 22:15:58.296928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.297008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.297029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.297039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.297048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.297065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 00:28:19.180 [2024-07-24 22:15:58.307019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.307101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.307119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.307129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.307137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.307155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 00:28:19.180 [2024-07-24 22:15:58.317006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.317093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.317111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.317121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.317130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.317147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 
00:28:19.180 [2024-07-24 22:15:58.327029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.327117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.327136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.327146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.327155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.327172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 00:28:19.180 [2024-07-24 22:15:58.337158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.337235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.337254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.337264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.337274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.337294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 00:28:19.180 [2024-07-24 22:15:58.347153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.347263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.347282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.347293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.347302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.347319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 
00:28:19.180 [2024-07-24 22:15:58.357185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.357265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.357283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.357294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.357303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.357320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 00:28:19.180 [2024-07-24 22:15:58.367212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.367289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.367307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.180 [2024-07-24 22:15:58.367317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.180 [2024-07-24 22:15:58.367326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.180 [2024-07-24 22:15:58.367342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.180 qpair failed and we were unable to recover it. 00:28:19.180 [2024-07-24 22:15:58.377263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.180 [2024-07-24 22:15:58.377340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.180 [2024-07-24 22:15:58.377357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.181 [2024-07-24 22:15:58.377367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.181 [2024-07-24 22:15:58.377376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.181 [2024-07-24 22:15:58.377393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.181 qpair failed and we were unable to recover it. 
00:28:19.181 [2024-07-24 22:15:58.387271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.181 [2024-07-24 22:15:58.387351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.181 [2024-07-24 22:15:58.387372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.181 [2024-07-24 22:15:58.387382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.181 [2024-07-24 22:15:58.387391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.181 [2024-07-24 22:15:58.387408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.181 qpair failed and we were unable to recover it. 00:28:19.441 [2024-07-24 22:15:58.397292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.441 [2024-07-24 22:15:58.397376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.441 [2024-07-24 22:15:58.397393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.397404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.397413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.397430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 00:28:19.442 [2024-07-24 22:15:58.407330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.407403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.407420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.407430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.407439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.407456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 
00:28:19.442 [2024-07-24 22:15:58.417392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.417471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.417489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.417499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.417508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.417525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 00:28:19.442 [2024-07-24 22:15:58.427399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.427473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.427491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.427501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.427510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.427530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 00:28:19.442 [2024-07-24 22:15:58.437413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.437506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.437523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.437533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.437541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.437559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 
00:28:19.442 [2024-07-24 22:15:58.447432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.447513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.447532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.447541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.447550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.447567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 00:28:19.442 [2024-07-24 22:15:58.457443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.457600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.457617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.457627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.457636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.457653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 00:28:19.442 [2024-07-24 22:15:58.467484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.467566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.467584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.467594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.467602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.467619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 
00:28:19.442 [2024-07-24 22:15:58.477524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.477603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.477624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.477634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.477643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.477660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 00:28:19.442 [2024-07-24 22:15:58.487591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.487702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.487723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.487733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.487742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.487759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 00:28:19.442 [2024-07-24 22:15:58.497559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.497639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.497657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.497667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.497676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.497693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 
00:28:19.442 [2024-07-24 22:15:58.507611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.507695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.507712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.507728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.507736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.507753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 00:28:19.442 [2024-07-24 22:15:58.517618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.517698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.517719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.517729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.517738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.442 [2024-07-24 22:15:58.517759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.442 qpair failed and we were unable to recover it. 00:28:19.442 [2024-07-24 22:15:58.527666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.442 [2024-07-24 22:15:58.527771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.442 [2024-07-24 22:15:58.527789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.442 [2024-07-24 22:15:58.527799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.442 [2024-07-24 22:15:58.527808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.527825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 
00:28:19.443 [2024-07-24 22:15:58.537683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.537759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.537776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.537786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.537794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.537811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 00:28:19.443 [2024-07-24 22:15:58.547726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.547805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.547823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.547833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.547843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.547860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 00:28:19.443 [2024-07-24 22:15:58.557683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.557775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.557792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.557802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.557810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.557827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 
00:28:19.443 [2024-07-24 22:15:58.567767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.567855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.567875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.567885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.567894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.567911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 00:28:19.443 [2024-07-24 22:15:58.577801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.577877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.577895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.577905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.577913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.577930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 00:28:19.443 [2024-07-24 22:15:58.587881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.587963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.587981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.587991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.588000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.588016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 
00:28:19.443 [2024-07-24 22:15:58.597940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.598018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.598035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.598045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.598054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.598070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 00:28:19.443 [2024-07-24 22:15:58.607921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.608026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.608043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.608053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.608065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.608082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 00:28:19.443 [2024-07-24 22:15:58.617902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.617989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.618007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.618017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.618026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.618043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 
00:28:19.443 [2024-07-24 22:15:58.627953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.628036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.628053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.628063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.628071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.628087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 00:28:19.443 [2024-07-24 22:15:58.637894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.637975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.637992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.638002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.638010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.638027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 00:28:19.443 [2024-07-24 22:15:58.648014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.443 [2024-07-24 22:15:58.648093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.443 [2024-07-24 22:15:58.648110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.443 [2024-07-24 22:15:58.648120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.443 [2024-07-24 22:15:58.648129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.443 [2024-07-24 22:15:58.648145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.443 qpair failed and we were unable to recover it. 
00:28:19.704 [2024-07-24 22:15:58.658049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.704 [2024-07-24 22:15:58.658134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.704 [2024-07-24 22:15:58.658153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.704 [2024-07-24 22:15:58.658163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.704 [2024-07-24 22:15:58.658172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.704 [2024-07-24 22:15:58.658189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.704 qpair failed and we were unable to recover it. 00:28:19.704 [2024-07-24 22:15:58.668080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.704 [2024-07-24 22:15:58.668159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.704 [2024-07-24 22:15:58.668177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.704 [2024-07-24 22:15:58.668187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.704 [2024-07-24 22:15:58.668196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.704 [2024-07-24 22:15:58.668213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.704 qpair failed and we were unable to recover it. 00:28:19.704 [2024-07-24 22:15:58.678095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.704 [2024-07-24 22:15:58.678181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.704 [2024-07-24 22:15:58.678199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.704 [2024-07-24 22:15:58.678209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.704 [2024-07-24 22:15:58.678217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.704 [2024-07-24 22:15:58.678235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.704 qpair failed and we were unable to recover it. 
00:28:19.704 [2024-07-24 22:15:58.688113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.704 [2024-07-24 22:15:58.688199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.704 [2024-07-24 22:15:58.688217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.704 [2024-07-24 22:15:58.688227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.704 [2024-07-24 22:15:58.688236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.704 [2024-07-24 22:15:58.688252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.704 qpair failed and we were unable to recover it. 00:28:19.704 [2024-07-24 22:15:58.698148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.704 [2024-07-24 22:15:58.698228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.704 [2024-07-24 22:15:58.698245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.704 [2024-07-24 22:15:58.698255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.704 [2024-07-24 22:15:58.698267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.704 [2024-07-24 22:15:58.698284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.704 qpair failed and we were unable to recover it. 00:28:19.704 [2024-07-24 22:15:58.708113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.704 [2024-07-24 22:15:58.708196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.704 [2024-07-24 22:15:58.708214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.704 [2024-07-24 22:15:58.708224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.704 [2024-07-24 22:15:58.708232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.704 [2024-07-24 22:15:58.708249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.704 qpair failed and we were unable to recover it. 
00:28:19.704 [2024-07-24 22:15:58.718194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.704 [2024-07-24 22:15:58.718272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.704 [2024-07-24 22:15:58.718290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.704 [2024-07-24 22:15:58.718300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.704 [2024-07-24 22:15:58.718309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.718326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.728281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.728383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.728400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.728409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.728418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.728435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.738291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.738403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.738421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.738430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.738439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.738456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 
00:28:19.705 [2024-07-24 22:15:58.748202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.748318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.748336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.748346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.748354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.748371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.758310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.758393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.758410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.758420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.758428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.758445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.768345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.768420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.768438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.768448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.768456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.768474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 
00:28:19.705 [2024-07-24 22:15:58.778358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.778438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.778456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.778466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.778475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.778492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.788387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.788462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.788480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.788493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.788501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.788518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.798415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.798496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.798513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.798523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.798532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.798549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 
00:28:19.705 [2024-07-24 22:15:58.808448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.808526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.808543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.808553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.808562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.808578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.818461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.818544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.818562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.818571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.818580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.818597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.828542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.828622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.828640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.828650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.828658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.828675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 
00:28:19.705 [2024-07-24 22:15:58.838499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.838577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.838595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.838604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.838613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.838630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.848561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.705 [2024-07-24 22:15:58.848641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.705 [2024-07-24 22:15:58.848658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.705 [2024-07-24 22:15:58.848668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.705 [2024-07-24 22:15:58.848677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.705 [2024-07-24 22:15:58.848694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-07-24 22:15:58.858598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.706 [2024-07-24 22:15:58.858677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.706 [2024-07-24 22:15:58.858695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.706 [2024-07-24 22:15:58.858705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.706 [2024-07-24 22:15:58.858720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.706 [2024-07-24 22:15:58.858737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.706 qpair failed and we were unable to recover it. 
00:28:19.706 [2024-07-24 22:15:58.868538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.706 [2024-07-24 22:15:58.868616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.706 [2024-07-24 22:15:58.868634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.706 [2024-07-24 22:15:58.868644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.706 [2024-07-24 22:15:58.868653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.706 [2024-07-24 22:15:58.868670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-07-24 22:15:58.878658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.706 [2024-07-24 22:15:58.878744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.706 [2024-07-24 22:15:58.878763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.706 [2024-07-24 22:15:58.878776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.706 [2024-07-24 22:15:58.878784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.706 [2024-07-24 22:15:58.878802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-07-24 22:15:58.888684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.706 [2024-07-24 22:15:58.888796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.706 [2024-07-24 22:15:58.888813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.706 [2024-07-24 22:15:58.888824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.706 [2024-07-24 22:15:58.888832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.706 [2024-07-24 22:15:58.888849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.706 qpair failed and we were unable to recover it. 
00:28:19.706 [2024-07-24 22:15:58.898730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.706 [2024-07-24 22:15:58.898844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.706 [2024-07-24 22:15:58.898861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.706 [2024-07-24 22:15:58.898871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.706 [2024-07-24 22:15:58.898880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.706 [2024-07-24 22:15:58.898897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-07-24 22:15:58.908728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.706 [2024-07-24 22:15:58.908810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.706 [2024-07-24 22:15:58.908828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.706 [2024-07-24 22:15:58.908838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.706 [2024-07-24 22:15:58.908847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.706 [2024-07-24 22:15:58.908864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.967 [2024-07-24 22:15:58.918687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:58.918770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:58.918788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:58.918798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:58.918807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:58.918825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 
00:28:19.967 [2024-07-24 22:15:58.928780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:58.928858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:58.928876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:58.928886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:58.928895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:58.928912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 00:28:19.967 [2024-07-24 22:15:58.938799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:58.938880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:58.938898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:58.938908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:58.938917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:58.938934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 00:28:19.967 [2024-07-24 22:15:58.948848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:58.948927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:58.948945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:58.948955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:58.948964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:58.948981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 
00:28:19.967 [2024-07-24 22:15:58.958875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:58.958954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:58.958972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:58.958982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:58.958990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:58.959007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 00:28:19.967 [2024-07-24 22:15:58.968920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:58.969036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:58.969054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:58.969067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:58.969076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:58.969093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 00:28:19.967 [2024-07-24 22:15:58.978936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:58.979009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:58.979027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:58.979036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:58.979045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:58.979062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 
00:28:19.967 [2024-07-24 22:15:58.988970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:58.989050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:58.989068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:58.989078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:58.989087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:58.989103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 00:28:19.967 [2024-07-24 22:15:58.999009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:58.999086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:58.999104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:58.999114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:58.999122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:58.999139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 00:28:19.967 [2024-07-24 22:15:59.009032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:59.009110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:59.009128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:59.009138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:59.009146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:59.009163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 
00:28:19.967 [2024-07-24 22:15:59.019055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:59.019134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:59.019151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:59.019162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:59.019170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:59.019187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 00:28:19.967 [2024-07-24 22:15:59.029048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:59.029137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:59.029155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:59.029164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.967 [2024-07-24 22:15:59.029173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.967 [2024-07-24 22:15:59.029190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.967 qpair failed and we were unable to recover it. 00:28:19.967 [2024-07-24 22:15:59.039077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.967 [2024-07-24 22:15:59.039160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.967 [2024-07-24 22:15:59.039177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.967 [2024-07-24 22:15:59.039187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.039196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.039212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 
00:28:19.968 [2024-07-24 22:15:59.049123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.049203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.049220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.049230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.049238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.049255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 00:28:19.968 [2024-07-24 22:15:59.059149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.059221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.059241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.059251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.059260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.059277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 00:28:19.968 [2024-07-24 22:15:59.069156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.069240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.069258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.069268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.069277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.069293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 
00:28:19.968 [2024-07-24 22:15:59.079173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.079258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.079275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.079285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.079294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.079311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 00:28:19.968 [2024-07-24 22:15:59.089239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.089320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.089337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.089347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.089356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.089373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 00:28:19.968 [2024-07-24 22:15:59.099254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.099418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.099435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.099445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.099453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.099471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 
00:28:19.968 [2024-07-24 22:15:59.109279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.109407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.109425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.109434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.109443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.109460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 00:28:19.968 [2024-07-24 22:15:59.119310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.119412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.119430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.119439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.119448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.119465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 00:28:19.968 [2024-07-24 22:15:59.129319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.129392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.129410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.129420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.129429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.129446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 
00:28:19.968 [2024-07-24 22:15:59.139372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.139445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.139463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.139473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.139481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.139498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 00:28:19.968 [2024-07-24 22:15:59.149425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.149552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.149572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.149582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.149591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.149608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 00:28:19.968 [2024-07-24 22:15:59.159425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.159511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.159528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.159538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.159547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.968 [2024-07-24 22:15:59.159564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.968 qpair failed and we were unable to recover it. 
00:28:19.968 [2024-07-24 22:15:59.169446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.968 [2024-07-24 22:15:59.169538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.968 [2024-07-24 22:15:59.169556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.968 [2024-07-24 22:15:59.169566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.968 [2024-07-24 22:15:59.169574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:19.969 [2024-07-24 22:15:59.169591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.969 qpair failed and we were unable to recover it. 00:28:20.229 [2024-07-24 22:15:59.179478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.229 [2024-07-24 22:15:59.179605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.229 [2024-07-24 22:15:59.179623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.179633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.179642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.179659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 00:28:20.230 [2024-07-24 22:15:59.189519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.189594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.189611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.189621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.189630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.189650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 
00:28:20.230 [2024-07-24 22:15:59.199546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.199638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.199656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.199665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.199674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.199691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 00:28:20.230 [2024-07-24 22:15:59.209496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.209586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.209604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.209613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.209622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.209639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 00:28:20.230 [2024-07-24 22:15:59.219514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.219591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.219608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.219619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.219627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.219644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 
00:28:20.230 [2024-07-24 22:15:59.229632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.229790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.229808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.229818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.229827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.229844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 00:28:20.230 [2024-07-24 22:15:59.239650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.239740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.239761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.239771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.239780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.239797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 00:28:20.230 [2024-07-24 22:15:59.249679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.249759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.249777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.249787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.249796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.249812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 
00:28:20.230 [2024-07-24 22:15:59.259706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.259787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.259805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.259815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.259823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.259841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 00:28:20.230 [2024-07-24 22:15:59.269742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.269821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.269838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.269848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.269857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.269874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 00:28:20.230 [2024-07-24 22:15:59.279767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.279848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.279866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.279876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.279884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.279904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 
00:28:20.230 [2024-07-24 22:15:59.289805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.289888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.289905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.289915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.289924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.289941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 00:28:20.230 [2024-07-24 22:15:59.299824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.299905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.299923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.299933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.299941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.299958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.230 qpair failed and we were unable to recover it. 00:28:20.230 [2024-07-24 22:15:59.309839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.230 [2024-07-24 22:15:59.309933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.230 [2024-07-24 22:15:59.309950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.230 [2024-07-24 22:15:59.309960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.230 [2024-07-24 22:15:59.309969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.230 [2024-07-24 22:15:59.309986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 
00:28:20.231 [2024-07-24 22:15:59.319878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.319958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.319976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.319987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.319995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.320012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 00:28:20.231 [2024-07-24 22:15:59.329880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.329962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.329982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.329993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.330003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.330019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 00:28:20.231 [2024-07-24 22:15:59.339918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.340001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.340019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.340029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.340038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.340055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 
00:28:20.231 [2024-07-24 22:15:59.349944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.350024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.350042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.350053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.350062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.350079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 00:28:20.231 [2024-07-24 22:15:59.359914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.359992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.360011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.360021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.360030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.360047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 00:28:20.231 [2024-07-24 22:15:59.370025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.370115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.370133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.370143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.370155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.370173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 
00:28:20.231 [2024-07-24 22:15:59.380043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.380121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.380140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.380149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.380158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.380175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 00:28:20.231 [2024-07-24 22:15:59.390061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.390166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.390183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.390193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.390202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.390219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 00:28:20.231 [2024-07-24 22:15:59.400080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.400163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.400181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.400191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.400199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.400216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 
00:28:20.231 [2024-07-24 22:15:59.410114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.410194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.410212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.410223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.410232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.410249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 00:28:20.231 [2024-07-24 22:15:59.420164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.420242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.420260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.420270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.420279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.420296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 00:28:20.231 [2024-07-24 22:15:59.430209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.430285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.430303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.430313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.430322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.430339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 
00:28:20.231 [2024-07-24 22:15:59.440149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.231 [2024-07-24 22:15:59.440226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.231 [2024-07-24 22:15:59.440243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.231 [2024-07-24 22:15:59.440253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.231 [2024-07-24 22:15:59.440262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.231 [2024-07-24 22:15:59.440279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.231 qpair failed and we were unable to recover it. 00:28:20.492 [2024-07-24 22:15:59.450184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.492 [2024-07-24 22:15:59.450262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.492 [2024-07-24 22:15:59.450280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.492 [2024-07-24 22:15:59.450290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.492 [2024-07-24 22:15:59.450299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.492 [2024-07-24 22:15:59.450316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.492 qpair failed and we were unable to recover it. 00:28:20.492 [2024-07-24 22:15:59.460289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.492 [2024-07-24 22:15:59.460367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.492 [2024-07-24 22:15:59.460385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.492 [2024-07-24 22:15:59.460395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.492 [2024-07-24 22:15:59.460407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.492 [2024-07-24 22:15:59.460424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.492 qpair failed and we were unable to recover it. 
00:28:20.492 [2024-07-24 22:15:59.470262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.492 [2024-07-24 22:15:59.470391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.492 [2024-07-24 22:15:59.470409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.492 [2024-07-24 22:15:59.470419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.492 [2024-07-24 22:15:59.470428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.492 [2024-07-24 22:15:59.470445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.492 qpair failed and we were unable to recover it. 00:28:20.492 [2024-07-24 22:15:59.480343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.492 [2024-07-24 22:15:59.480426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.492 [2024-07-24 22:15:59.480444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.492 [2024-07-24 22:15:59.480454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.492 [2024-07-24 22:15:59.480463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.492 [2024-07-24 22:15:59.480480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.492 qpair failed and we were unable to recover it. 00:28:20.492 [2024-07-24 22:15:59.490284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.492 [2024-07-24 22:15:59.490360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.492 [2024-07-24 22:15:59.490378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.492 [2024-07-24 22:15:59.490388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.492 [2024-07-24 22:15:59.490398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.492 [2024-07-24 22:15:59.490415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.492 qpair failed and we were unable to recover it. 
00:28:20.492 [2024-07-24 22:15:59.500319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.492 [2024-07-24 22:15:59.500394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.492 [2024-07-24 22:15:59.500412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.492 [2024-07-24 22:15:59.500421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.492 [2024-07-24 22:15:59.500431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.492 [2024-07-24 22:15:59.500448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.492 qpair failed and we were unable to recover it. 00:28:20.492 [2024-07-24 22:15:59.510345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.492 [2024-07-24 22:15:59.510435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.492 [2024-07-24 22:15:59.510454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.492 [2024-07-24 22:15:59.510464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.492 [2024-07-24 22:15:59.510472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.492 [2024-07-24 22:15:59.510490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.492 qpair failed and we were unable to recover it. 00:28:20.492 [2024-07-24 22:15:59.520425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.492 [2024-07-24 22:15:59.520516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.492 [2024-07-24 22:15:59.520534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.492 [2024-07-24 22:15:59.520544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.492 [2024-07-24 22:15:59.520552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f71a0 00:28:20.492 [2024-07-24 22:15:59.520569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.492 qpair failed and we were unable to recover it. 
00:28:20.493 [2024-07-24 22:15:59.530479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.493 [2024-07-24 22:15:59.530558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.493 [2024-07-24 22:15:59.530582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.493 [2024-07-24 22:15:59.530594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.493 [2024-07-24 22:15:59.530604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2d70000b90 00:28:20.493 [2024-07-24 22:15:59.530625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:20.493 qpair failed and we were unable to recover it. 00:28:20.493 [2024-07-24 22:15:59.540481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.493 [2024-07-24 22:15:59.540560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.493 [2024-07-24 22:15:59.540578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.493 [2024-07-24 22:15:59.540588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.493 [2024-07-24 22:15:59.540597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2d70000b90 00:28:20.493 [2024-07-24 22:15:59.540616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:20.493 qpair failed and we were unable to recover it. 00:28:20.493 [2024-07-24 22:15:59.550586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.493 [2024-07-24 22:15:59.550703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.493 [2024-07-24 22:15:59.550737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.493 [2024-07-24 22:15:59.550756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.493 [2024-07-24 22:15:59.550769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2d6c000b90 00:28:20.493 [2024-07-24 22:15:59.550796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:20.493 qpair failed and we were unable to recover it. 
00:28:20.493 [2024-07-24 22:15:59.560532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.493 [2024-07-24 22:15:59.560661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.493 [2024-07-24 22:15:59.560680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.493 [2024-07-24 22:15:59.560691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.493 [2024-07-24 22:15:59.560700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2d6c000b90 00:28:20.493 [2024-07-24 22:15:59.560723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:20.493 qpair failed and we were unable to recover it. 00:28:20.493 [2024-07-24 22:15:59.570595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.493 [2024-07-24 22:15:59.570764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.493 [2024-07-24 22:15:59.570792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.493 [2024-07-24 22:15:59.570808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.493 [2024-07-24 22:15:59.570820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2d64000b90 00:28:20.493 [2024-07-24 22:15:59.570849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:20.493 qpair failed and we were unable to recover it. 00:28:20.493 [2024-07-24 22:15:59.580602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.493 [2024-07-24 22:15:59.580676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.493 [2024-07-24 22:15:59.580694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.493 [2024-07-24 22:15:59.580704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.493 [2024-07-24 22:15:59.580717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2d64000b90 00:28:20.493 [2024-07-24 22:15:59.580737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:20.493 qpair failed and we were unable to recover it. 00:28:20.493 [2024-07-24 22:15:59.580830] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:20.493 A controller has encountered a failure and is being reset. 00:28:20.493 [2024-07-24 22:15:59.580942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1405210 (9): Bad file descriptor 00:28:20.493 Controller properly reset. 
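(Editor's illustration, not part of the captured output.) The repeated CONNECT rejections above come from the nvmf_target_disconnect_tc2 run: the target reports "Unknown controller ID 0x1" and fails each I/O qpair CONNECT with sct 1, sc 130, so the host keeps logging "qpair failed and we were unable to recover it" until the controller is reset ("Controller properly reset."). As a hedged sketch only, the trtype/adrfam/traddr/trsvcid/subnqn fields in those log lines map directly onto nvme-cli flags, so an equivalent host-side fabrics CONNECT to the same listener could look like the following, assuming nvme-cli is installed and 10.0.0.2:4420 is reachable from the host:

  # illustrative only: connect to the TCP listener and subsystem seen in the log above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # tear the association back down when finished
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1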
00:28:20.493 Initializing NVMe Controllers 00:28:20.493 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:20.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:20.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:20.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:20.493 Initialization complete. Launching workers. 00:28:20.493 Starting thread on core 1 00:28:20.493 Starting thread on core 2 00:28:20.493 Starting thread on core 3 00:28:20.493 Starting thread on core 0 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:20.493 00:28:20.493 real 0m11.309s 00:28:20.493 user 0m20.595s 00:28:20.493 sys 0m4.766s 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:20.493 ************************************ 00:28:20.493 END TEST nvmf_target_disconnect_tc2 00:28:20.493 ************************************ 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:20.493 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:20.753 rmmod nvme_tcp 00:28:20.753 rmmod nvme_fabrics 00:28:20.753 rmmod nvme_keyring 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2853517 ']' 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2853517 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2853517 ']' 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2853517 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2853517 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2853517' 00:28:20.753 killing process with pid 2853517 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2853517 00:28:20.753 22:15:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2853517 00:28:21.011 22:16:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:21.011 22:16:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:21.011 22:16:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:21.011 22:16:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:21.011 22:16:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:21.011 22:16:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.011 22:16:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.011 22:16:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.918 22:16:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:22.918 00:28:22.918 real 0m20.899s 00:28:22.918 user 0m48.082s 00:28:22.918 sys 0m10.501s 00:28:22.918 22:16:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:22.918 22:16:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:22.918 ************************************ 00:28:22.918 END TEST nvmf_target_disconnect 00:28:22.918 ************************************ 00:28:23.178 22:16:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:23.178 00:28:23.178 real 6m12.877s 00:28:23.178 user 11m0.271s 00:28:23.178 sys 2m16.379s 00:28:23.178 22:16:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.178 22:16:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.178 ************************************ 00:28:23.178 END TEST nvmf_host 00:28:23.178 ************************************ 00:28:23.178 00:28:23.178 real 22m20.835s 00:28:23.178 user 45m36.745s 00:28:23.178 sys 8m16.473s 00:28:23.178 22:16:02 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.178 22:16:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.178 ************************************ 00:28:23.178 END TEST nvmf_tcp 00:28:23.178 ************************************ 00:28:23.178 22:16:02 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:28:23.178 22:16:02 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:23.178 22:16:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 
1 ']' 00:28:23.178 22:16:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:23.178 22:16:02 -- common/autotest_common.sh@10 -- # set +x 00:28:23.178 ************************************ 00:28:23.178 START TEST spdkcli_nvmf_tcp 00:28:23.178 ************************************ 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:23.178 * Looking for test storage... 00:28:23.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.178 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2855076 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2855076 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2855076 ']' 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.437 22:16:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.437 [2024-07-24 22:16:02.465748] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:28:23.437 [2024-07-24 22:16:02.465803] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855076 ] 00:28:23.437 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.437 [2024-07-24 22:16:02.535363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:23.437 [2024-07-24 22:16:02.609360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.437 [2024-07-24 22:16:02.609364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:24.377 22:16:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:24.377 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:24.377 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:24.377 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:24.377 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:24.377 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:24.377 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:24.377 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 
00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:24.377 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:24.377 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:24.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:24.377 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:24.377 ' 00:28:26.913 [2024-07-24 22:16:05.677763] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.849 [2024-07-24 22:16:06.853643] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:30.385 [2024-07-24 22:16:09.016196] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:31.823 [2024-07-24 22:16:10.902201] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:33.201 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:33.201 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:33.201 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:33.201 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:33.201 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:33.201 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:33.201 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:33.201 
Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:33.201 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:33.201 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:33.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:33.201 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:33.459 22:16:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:33.459 22:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:33.459 22:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.459 22:16:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:33.459 22:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.459 22:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.459 22:16:12 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:33.459 22:16:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:33.718 22:16:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:33.718 22:16:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:33.718 22:16:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:33.718 22:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:33.718 22:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.718 22:16:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:33.718 22:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.718 22:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.977 22:16:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:33.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:33.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:33.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:33.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:33.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:33.977 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:33.977 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:33.977 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:33.977 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:33.977 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:33.977 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:33.977 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:33.977 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:33.977 ' 00:28:39.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:39.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:39.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:39.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:39.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:39.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:39.249 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
00:28:39.249 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:39.249 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:39.249 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:39.249 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:39.249 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:39.249 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:39.249 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2855076 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2855076 ']' 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2855076 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2855076 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2855076' 00:28:39.249 killing process with pid 2855076 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2855076 00:28:39.249 22:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2855076 00:28:39.249 22:16:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:39.249 22:16:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:39.249 22:16:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2855076 ']' 00:28:39.250 22:16:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2855076 00:28:39.250 22:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2855076 ']' 00:28:39.250 22:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2855076 00:28:39.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2855076) - No such process 00:28:39.250 22:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2855076 is not found' 00:28:39.250 Process with pid 2855076 is not found 00:28:39.250 22:16:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:39.250 22:16:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:39.250 22:16:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:39.250 00:28:39.250 real 0m15.869s 00:28:39.250 user 0m32.724s 00:28:39.250 sys 0m0.885s 00:28:39.250 22:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:39.250 22:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:39.250 
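The create/clear sequences driven by spdkcli_job.py above map onto ordinary one-shot spdkcli commands. A minimal, cut-down sketch for reproducing a similar target state by hand (an assumption-laden example, not the harness itself: it assumes an nvmf_tgt is already running on the default /var/tmp/spdk.sock and that commands are issued from the SPDK source tree; NQN, serial number and port values are copied from the log):

# create phase: one malloc bdev, TCP transport, subsystem, namespace and listener
scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
scripts/spdkcli.py ll /nvmf    # same listing the check_match step compares against its .test.match file
# clear phase, mirroring the delete commands above
scripts/spdkcli.py /nvmf/subsystem delete_all
scripts/spdkcli.py /bdevs/malloc delete Malloc1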
************************************ 00:28:39.250 END TEST spdkcli_nvmf_tcp 00:28:39.250 ************************************ 00:28:39.250 22:16:18 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:39.250 22:16:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:39.250 22:16:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.250 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:28:39.250 ************************************ 00:28:39.250 START TEST nvmf_identify_passthru 00:28:39.250 ************************************ 00:28:39.250 22:16:18 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:39.250 * Looking for test storage... 00:28:39.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.250 22:16:18 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.250 22:16:18 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.250 22:16:18 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.250 22:16:18 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:39.250 22:16:18 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.250 22:16:18 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.250 22:16:18 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.250 22:16:18 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:39.250 22:16:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.250 22:16:18 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.250 22:16:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:39.250 22:16:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:39.250 22:16:18 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:39.250 22:16:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.819 22:16:24 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:45.819 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:45.820 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:45.820 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:45.820 Found net devices under 0000:af:00.0: cvl_0_0 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:45.820 Found net devices under 0000:af:00.1: cvl_0_1 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
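The gather_supported_nvmf_pci_devs trace above is essentially a sysfs walk that turns the two e810 PCI functions into kernel net device names. A hedged stand-alone equivalent of that lookup (PCI addresses taken from the log; the e810/x722/mlx device-ID matching is omitted):

# map each test NIC's PCI address to its net device name via sysfs,
# the same lookup nvmf/common.sh performs in the trace above
for pci in 0000:af:00.0 0000:af:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue
        echo "Found net devices under $pci: $(basename "$dev")"
    done
done
# expected output on this host, matching the log: cvl_0_0 and cvl_0_1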
00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:45.820 22:16:24 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.820 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:46.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:28:46.079 00:28:46.079 --- 10.0.0.2 ping statistics --- 00:28:46.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.079 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:46.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:28:46.079 00:28:46.079 --- 10.0.0.1 ping statistics --- 00:28:46.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.079 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:46.079 22:16:25 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:46.079 22:16:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.079 22:16:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:28:46.079 22:16:25 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:d8:00.0 00:28:46.079 22:16:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:28:46.080 22:16:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:28:46.080 22:16:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:46.080 22:16:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:28:46.080 22:16:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:46.338 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.609 
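The nvmf_tcp_init trace above builds the physical-NIC test topology: the target port is moved into its own network namespace while the initiator port stays in the root namespace, so target and host traffic actually crosses the wire, and a ping in each direction confirms reachability before the target starts. A condensed sketch of those steps (interface names and addresses copied from the log; assumes root privileges and that both interfaces start unconfigured):

# target side: cvl_0_0 lives in its own netns with 10.0.0.2/24
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side: cvl_0_1 stays in the root namespace with 10.0.0.1/24
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# allow NVMe/TCP traffic in and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1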
22:16:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:28:51.609 22:16:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:28:51.610 22:16:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:51.610 22:16:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:51.610 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.803 22:16:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:55.803 22:16:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:55.803 22:16:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:55.803 22:16:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2862533 00:28:55.803 22:16:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:55.803 22:16:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:55.803 22:16:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2862533 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2862533 ']' 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:55.803 22:16:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:55.803 [2024-07-24 22:16:34.771540] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:28:55.803 [2024-07-24 22:16:34.771593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.803 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.803 [2024-07-24 22:16:34.843419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.803 [2024-07-24 22:16:34.916876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.803 [2024-07-24 22:16:34.916914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:55.803 [2024-07-24 22:16:34.916924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.803 [2024-07-24 22:16:34.916932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.803 [2024-07-24 22:16:34.916940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.803 [2024-07-24 22:16:34.917005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.803 [2024-07-24 22:16:34.917098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.803 [2024-07-24 22:16:34.917185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.803 [2024-07-24 22:16:34.917187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.439 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:56.439 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:28:56.439 22:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:56.439 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.439 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:56.439 INFO: Log level set to 20 00:28:56.439 INFO: Requests: 00:28:56.439 { 00:28:56.439 "jsonrpc": "2.0", 00:28:56.439 "method": "nvmf_set_config", 00:28:56.439 "id": 1, 00:28:56.439 "params": { 00:28:56.439 "admin_cmd_passthru": { 00:28:56.439 "identify_ctrlr": true 00:28:56.439 } 00:28:56.439 } 00:28:56.439 } 00:28:56.439 00:28:56.439 INFO: response: 00:28:56.439 { 00:28:56.439 "jsonrpc": "2.0", 00:28:56.439 "id": 1, 00:28:56.439 "result": true 00:28:56.439 } 00:28:56.439 00:28:56.439 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.439 22:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:56.439 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.439 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:56.439 INFO: Setting log level to 20 00:28:56.439 INFO: Setting log level to 20 00:28:56.439 INFO: Log level set to 20 00:28:56.439 INFO: Log level set to 20 00:28:56.439 INFO: Requests: 00:28:56.439 { 00:28:56.439 "jsonrpc": "2.0", 00:28:56.439 "method": "framework_start_init", 00:28:56.439 "id": 1 00:28:56.439 } 00:28:56.439 00:28:56.439 INFO: Requests: 00:28:56.439 { 00:28:56.439 "jsonrpc": "2.0", 00:28:56.439 "method": "framework_start_init", 00:28:56.439 "id": 1 00:28:56.439 } 00:28:56.439 00:28:56.698 [2024-07-24 22:16:35.670214] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:56.698 INFO: response: 00:28:56.698 { 00:28:56.698 "jsonrpc": "2.0", 00:28:56.698 "id": 1, 00:28:56.698 "result": true 00:28:56.698 } 00:28:56.698 00:28:56.698 INFO: response: 00:28:56.698 { 00:28:56.698 "jsonrpc": "2.0", 00:28:56.698 "id": 1, 00:28:56.698 "result": true 00:28:56.698 } 00:28:56.698 00:28:56.698 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.698 22:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.698 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.698 22:16:35 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:56.698 INFO: Setting log level to 40 00:28:56.698 INFO: Setting log level to 40 00:28:56.698 INFO: Setting log level to 40 00:28:56.698 [2024-07-24 22:16:35.683690] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.698 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.698 22:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:56.698 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:56.698 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:56.698 22:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:28:56.698 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.698 22:16:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:59.986 Nvme0n1 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.986 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.986 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.986 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:59.986 [2024-07-24 22:16:38.605742] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.986 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:59.986 [ 00:28:59.986 { 00:28:59.986 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:59.986 "subtype": "Discovery", 00:28:59.986 "listen_addresses": [], 00:28:59.986 "allow_any_host": true, 00:28:59.986 "hosts": [] 00:28:59.986 }, 00:28:59.986 { 00:28:59.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.986 "subtype": "NVMe", 00:28:59.986 "listen_addresses": [ 00:28:59.986 { 00:28:59.986 "trtype": "TCP", 00:28:59.986 "adrfam": "IPv4", 00:28:59.986 "traddr": "10.0.0.2", 00:28:59.986 "trsvcid": "4420" 00:28:59.986 } 00:28:59.986 ], 00:28:59.986 "allow_any_host": true, 00:28:59.986 "hosts": [], 00:28:59.986 "serial_number": 
"SPDK00000000000001", 00:28:59.986 "model_number": "SPDK bdev Controller", 00:28:59.986 "max_namespaces": 1, 00:28:59.986 "min_cntlid": 1, 00:28:59.986 "max_cntlid": 65519, 00:28:59.986 "namespaces": [ 00:28:59.986 { 00:28:59.986 "nsid": 1, 00:28:59.986 "bdev_name": "Nvme0n1", 00:28:59.986 "name": "Nvme0n1", 00:28:59.986 "nguid": "617876A6A03E4C89A936971D6BC96A91", 00:28:59.986 "uuid": "617876a6-a03e-4c89-a936-971d6bc96a91" 00:28:59.986 } 00:28:59.986 ] 00:28:59.986 } 00:28:59.986 ] 00:28:59.986 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.986 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:59.986 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:59.987 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:59.987 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.987 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.987 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:59.987 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:59.987 22:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.987 rmmod nvme_tcp 00:28:59.987 rmmod nvme_fabrics 00:28:59.987 rmmod nvme_keyring 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:59.987 22:16:38 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2862533 ']' 00:28:59.987 22:16:38 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2862533 00:28:59.987 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2862533 ']' 00:28:59.987 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2862533 00:28:59.987 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:28:59.987 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.987 22:16:38 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2862533 00:28:59.987 22:16:39 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:59.987 22:16:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:59.987 22:16:39 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2862533' 00:28:59.987 killing process with pid 2862533 00:28:59.987 22:16:39 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2862533 00:28:59.987 22:16:39 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2862533 00:29:01.892 22:16:41 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:01.892 22:16:41 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:01.892 22:16:41 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:01.892 22:16:41 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:01.892 22:16:41 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:01.892 22:16:41 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.892 22:16:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:01.892 22:16:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.427 22:16:43 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:04.427 00:29:04.427 real 0m24.949s 00:29:04.427 user 0m33.183s 00:29:04.427 sys 0m6.426s 00:29:04.427 22:16:43 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.427 22:16:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:04.427 ************************************ 00:29:04.427 END TEST nvmf_identify_passthru 00:29:04.427 ************************************ 00:29:04.427 22:16:43 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:04.427 22:16:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.427 22:16:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.427 22:16:43 -- common/autotest_common.sh@10 -- # set +x 00:29:04.427 ************************************ 00:29:04.427 START TEST nvmf_dif 00:29:04.427 ************************************ 00:29:04.427 22:16:43 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:04.427 * Looking for test storage... 
00:29:04.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:04.427 22:16:43 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.427 22:16:43 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.427 22:16:43 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.427 22:16:43 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.427 22:16:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.427 22:16:43 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.427 22:16:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.427 22:16:43 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:29:04.427 22:16:43 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:04.427 22:16:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:04.427 22:16:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:04.427 22:16:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:04.427 22:16:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:04.427 22:16:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.427 22:16:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:04.427 22:16:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:04.427 22:16:43 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:04.427 22:16:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:10.997 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:10.997 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:10.997 Found net devices under 0000:af:00.0: cvl_0_0 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:10.997 Found net devices under 0000:af:00.1: cvl_0_1 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.997 22:16:49 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.997 22:16:49 
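The initialization above splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, and TCP port 4420 is opened on the initiator interface. A quick manual check of that topology (a sketch, not taken from the run):

# Sketch only: confirm the namespace split built by nvmf_tcp_init above.
ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0   # expect 10.0.0.2/24 (target side)
ip -4 addr show dev cvl_0_1                                 # expect 10.0.0.1/24 (initiator side)
iptables -C INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT && echo 'port 4420 accepted'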
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:10.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:29:10.997 00:29:10.997 --- 10.0.0.2 ping statistics --- 00:29:10.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.998 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:29:10.998 22:16:49 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:10.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:29:10.998 00:29:10.998 --- 10.0.0.1 ping statistics --- 00:29:10.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.998 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:29:10.998 22:16:49 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.998 22:16:49 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:10.998 22:16:49 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:10.998 22:16:49 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:14.290 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:14.290 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:14.290 22:16:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:14.290 22:16:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:14.290 22:16:53 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:14.290 22:16:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2868542 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:14.290 22:16:53 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2868542 00:29:14.290 22:16:53 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2868542 ']' 00:29:14.290 22:16:53 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.290 22:16:53 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:14.290 22:16:53 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.290 22:16:53 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:14.290 22:16:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:14.290 [2024-07-24 22:16:53.384506] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:29:14.290 [2024-07-24 22:16:53.384548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.290 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.290 [2024-07-24 22:16:53.459419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.549 [2024-07-24 22:16:53.531249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.549 [2024-07-24 22:16:53.531288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.549 [2024-07-24 22:16:53.531298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.549 [2024-07-24 22:16:53.531306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.549 [2024-07-24 22:16:53.531313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
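The target was started inside the namespace with shared-memory id 0 and the full tracepoint mask (-i 0 -e 0xFFFF), so the notices above apply directly: a trace snapshot can be captured while the tests run, and the RPC socket can be probed before any configuration is pushed. A short sketch (the rpc.py and spdk_trace paths are assumptions; the socket is the default one waitforlisten polls):

# Sketch only: inspect the running target started above.
spdk_trace -s nvmf -i 0                                                # snapshot, as suggested by app_setup_trace
/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version    # confirm the RPC server is up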
00:29:14.549 [2024-07-24 22:16:53.531336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.116 22:16:54 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:15.116 22:16:54 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:29:15.116 22:16:54 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:15.116 22:16:54 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:15.116 22:16:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 22:16:54 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.117 22:16:54 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:15.117 22:16:54 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:15.117 22:16:54 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.117 22:16:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 [2024-07-24 22:16:54.226132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.117 22:16:54 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.117 22:16:54 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:15.117 22:16:54 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:15.117 22:16:54 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:15.117 22:16:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 ************************************ 00:29:15.117 START TEST fio_dif_1_default 00:29:15.117 ************************************ 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 bdev_null0 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 [2024-07-24 22:16:54.294443] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:15.117 { 00:29:15.117 "params": { 00:29:15.117 "name": "Nvme$subsystem", 00:29:15.117 "trtype": "$TEST_TRANSPORT", 00:29:15.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.117 "adrfam": "ipv4", 00:29:15.117 "trsvcid": "$NVMF_PORT", 00:29:15.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.117 "hdgst": ${hdgst:-false}, 00:29:15.117 "ddgst": ${ddgst:-false} 00:29:15.117 }, 00:29:15.117 "method": "bdev_nvme_attach_controller" 00:29:15.117 } 00:29:15.117 EOF 00:29:15.117 )") 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@554 -- # cat 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:15.117 22:16:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:15.117 "params": { 00:29:15.117 "name": "Nvme0", 00:29:15.117 "trtype": "tcp", 00:29:15.117 "traddr": "10.0.0.2", 00:29:15.117 "adrfam": "ipv4", 00:29:15.117 "trsvcid": "4420", 00:29:15.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.117 "hdgst": false, 00:29:15.117 "ddgst": false 00:29:15.117 }, 00:29:15.117 "method": "bdev_nvme_attach_controller" 00:29:15.117 }' 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:15.402 22:16:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.663 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:15.663 fio-3.35 00:29:15.663 Starting 1 thread 00:29:15.663 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.946 00:29:27.946 filename0: (groupid=0, jobs=1): err= 0: pid=2868964: Wed Jul 24 22:17:05 2024 00:29:27.946 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10006msec) 00:29:27.946 slat (nsec): min=5583, max=81569, avg=6182.11, stdev=3006.46 00:29:27.946 clat (usec): min=40885, max=46338, avg=41326.92, stdev=580.39 00:29:27.946 lat (usec): min=40891, max=46370, avg=41333.10, stdev=580.91 00:29:27.946 clat percentiles (usec): 00:29:27.946 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:27.946 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:27.946 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:27.946 | 99.00th=[42730], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:29:27.946 | 99.99th=[46400] 00:29:27.946 bw ( KiB/s): min= 352, max= 416, per=99.49%, avg=385.60, stdev=12.61, samples=20 00:29:27.946 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 
00:29:27.946 lat (msec) : 50=100.00% 00:29:27.946 cpu : usr=85.05%, sys=14.71%, ctx=16, majf=0, minf=273 00:29:27.946 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:27.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.946 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:27.946 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:27.946 00:29:27.946 Run status group 0 (all jobs): 00:29:27.946 READ: bw=387KiB/s (396kB/s), 387KiB/s-387KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10006-10006msec 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.946 00:29:27.946 real 0m11.088s 00:29:27.946 user 0m16.974s 00:29:27.946 sys 0m1.800s 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:27.946 ************************************ 00:29:27.946 END TEST fio_dif_1_default 00:29:27.946 ************************************ 00:29:27.946 22:17:05 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:27.946 22:17:05 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:27.946 22:17:05 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:27.946 22:17:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:27.946 ************************************ 00:29:27.946 START TEST fio_dif_1_multi_subsystems 00:29:27.946 ************************************ 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:27.946 
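The fio_dif_1_default flow that just finished reduces to a short RPC sequence: create a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, wrap it in subsystem cnode0, and expose it on the 10.0.0.2:4420 TCP listener. Written out as plain rpc.py calls (a sketch; rpc_cmd in the test forwards to the same script, and the script path is an assumption):

# Sketch of the RPCs shown above, as direct rpc.py invocations.
RPC=/path/to/spdk/scripts/rpc.py
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# The multi-subsystem test starting next repeats this block for bdev_null1/cnode1.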
22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.946 bdev_null0 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.946 [2024-07-24 22:17:05.473597] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.946 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.946 bdev_null1 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:27.947 { 00:29:27.947 "params": { 00:29:27.947 "name": "Nvme$subsystem", 00:29:27.947 "trtype": "$TEST_TRANSPORT", 00:29:27.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.947 "adrfam": "ipv4", 00:29:27.947 "trsvcid": "$NVMF_PORT", 00:29:27.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.947 "hdgst": ${hdgst:-false}, 00:29:27.947 
"ddgst": ${ddgst:-false} 00:29:27.947 }, 00:29:27.947 "method": "bdev_nvme_attach_controller" 00:29:27.947 } 00:29:27.947 EOF 00:29:27.947 )") 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:27.947 { 00:29:27.947 "params": { 00:29:27.947 "name": "Nvme$subsystem", 00:29:27.947 "trtype": "$TEST_TRANSPORT", 00:29:27.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.947 "adrfam": "ipv4", 00:29:27.947 "trsvcid": "$NVMF_PORT", 00:29:27.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.947 "hdgst": ${hdgst:-false}, 00:29:27.947 "ddgst": ${ddgst:-false} 00:29:27.947 }, 00:29:27.947 "method": "bdev_nvme_attach_controller" 00:29:27.947 } 00:29:27.947 EOF 00:29:27.947 )") 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:27.947 "params": { 00:29:27.947 "name": "Nvme0", 00:29:27.947 "trtype": "tcp", 00:29:27.947 "traddr": "10.0.0.2", 00:29:27.947 "adrfam": "ipv4", 00:29:27.947 "trsvcid": "4420", 00:29:27.947 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.947 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:27.947 "hdgst": false, 00:29:27.947 "ddgst": false 00:29:27.947 }, 00:29:27.947 "method": "bdev_nvme_attach_controller" 00:29:27.947 },{ 00:29:27.947 "params": { 00:29:27.947 "name": "Nvme1", 00:29:27.947 "trtype": "tcp", 00:29:27.947 "traddr": "10.0.0.2", 00:29:27.947 "adrfam": "ipv4", 00:29:27.947 "trsvcid": "4420", 00:29:27.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.947 "hdgst": false, 00:29:27.947 "ddgst": false 00:29:27.947 }, 00:29:27.947 "method": "bdev_nvme_attach_controller" 00:29:27.947 }' 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:27.947 22:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:27.947 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:27.947 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:27.947 fio-3.35 00:29:27.947 Starting 2 threads 00:29:27.947 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.924 00:29:37.924 filename0: (groupid=0, jobs=1): err= 0: pid=2870955: Wed Jul 24 22:17:16 2024 00:29:37.924 read: IOPS=188, BW=753KiB/s (771kB/s)(7536KiB/10004msec) 00:29:37.924 slat (nsec): min=5771, max=70099, avg=6934.46, stdev=2686.27 00:29:37.924 clat (usec): min=803, max=42490, avg=21218.79, stdev=20324.14 00:29:37.924 lat (usec): min=809, max=42520, avg=21225.73, stdev=20323.42 00:29:37.924 clat percentiles (usec): 00:29:37.924 | 1.00th=[ 807], 5.00th=[ 824], 10.00th=[ 832], 20.00th=[ 840], 00:29:37.924 | 30.00th=[ 873], 40.00th=[ 889], 50.00th=[41157], 60.00th=[41157], 00:29:37.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:29:37.924 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:37.924 | 99.99th=[42730] 
00:29:37.924 bw ( KiB/s): min= 704, max= 768, per=50.07%, avg=754.53, stdev=24.59, samples=19 00:29:37.924 iops : min= 176, max= 192, avg=188.63, stdev= 6.15, samples=19 00:29:37.924 lat (usec) : 1000=49.68% 00:29:37.924 lat (msec) : 2=0.21%, 50=50.11% 00:29:37.924 cpu : usr=93.07%, sys=6.68%, ctx=9, majf=0, minf=209 00:29:37.924 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:37.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.924 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.924 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:37.924 filename1: (groupid=0, jobs=1): err= 0: pid=2870956: Wed Jul 24 22:17:16 2024 00:29:37.924 read: IOPS=188, BW=753KiB/s (771kB/s)(7536KiB/10009msec) 00:29:37.924 slat (nsec): min=5776, max=32451, avg=6850.45, stdev=2154.79 00:29:37.924 clat (usec): min=775, max=42866, avg=21229.77, stdev=20376.35 00:29:37.924 lat (usec): min=781, max=42875, avg=21236.62, stdev=20375.69 00:29:37.924 clat percentiles (usec): 00:29:37.924 | 1.00th=[ 783], 5.00th=[ 791], 10.00th=[ 799], 20.00th=[ 807], 00:29:37.924 | 30.00th=[ 816], 40.00th=[ 824], 50.00th=[41157], 60.00th=[41157], 00:29:37.924 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:29:37.924 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:37.924 | 99.99th=[42730] 00:29:37.924 bw ( KiB/s): min= 704, max= 768, per=49.94%, avg=752.00, stdev=28.43, samples=20 00:29:37.924 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:29:37.924 lat (usec) : 1000=49.04% 00:29:37.924 lat (msec) : 2=0.85%, 50=50.11% 00:29:37.924 cpu : usr=93.30%, sys=6.45%, ctx=13, majf=0, minf=73 00:29:37.924 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:37.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.924 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.924 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:37.924 00:29:37.924 Run status group 0 (all jobs): 00:29:37.924 READ: bw=1506KiB/s (1542kB/s), 753KiB/s-753KiB/s (771kB/s-771kB/s), io=14.7MiB (15.4MB), run=10004-10009msec 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.924 00:29:37.924 real 0m11.468s 00:29:37.924 user 0m28.095s 00:29:37.924 sys 0m1.730s 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:37.924 22:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.924 ************************************ 00:29:37.924 END TEST fio_dif_1_multi_subsystems 00:29:37.924 ************************************ 00:29:37.924 22:17:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:37.924 22:17:16 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:37.924 22:17:16 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:37.924 22:17:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:37.924 ************************************ 00:29:37.924 START TEST fio_dif_rand_params 00:29:37.924 ************************************ 00:29:37.924 22:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:29:37.924 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:37.924 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:37.924 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:37.924 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:37.924 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:37.924 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:37.925 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:37.925 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:37.925 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:37.925 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:37.925 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:37.925 
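This pass switches the null bdev to DIF type 3 and drives it harder: 128 KiB blocks, three jobs, queue depth 3, for 5 seconds. A hedged sketch of an equivalent stand-alone fio job for those parameters follows; the test itself generates the job and the bdev JSON on the fly and feeds both through /dev/fd, so the file names, paths and the Nvme0n1 bdev name below are assumptions:

# Sketch only: a file-based equivalent of the generated fio job.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
time_based=1
runtime=5

[filename0]
rw=randread
bs=128k
numjobs=3
iodepth=3
filename=Nvme0n1
EOF
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev fio /tmp/dif_rand_params.fio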
22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:37.925 22:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:37.925 bdev_null0 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:37.925 [2024-07-24 22:17:17.028949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.925 { 00:29:37.925 "params": { 00:29:37.925 "name": "Nvme$subsystem", 00:29:37.925 "trtype": "$TEST_TRANSPORT", 00:29:37.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.925 "adrfam": "ipv4", 00:29:37.925 "trsvcid": "$NVMF_PORT", 00:29:37.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.925 "hdgst": ${hdgst:-false}, 00:29:37.925 "ddgst": ${ddgst:-false} 00:29:37.925 }, 00:29:37.925 "method": "bdev_nvme_attach_controller" 00:29:37.925 } 00:29:37.925 EOF 00:29:37.925 )") 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:37.925 "params": { 00:29:37.925 "name": "Nvme0", 00:29:37.925 "trtype": "tcp", 00:29:37.925 "traddr": "10.0.0.2", 00:29:37.925 "adrfam": "ipv4", 00:29:37.925 "trsvcid": "4420", 00:29:37.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:37.925 "hdgst": false, 00:29:37.925 "ddgst": false 00:29:37.925 }, 00:29:37.925 "method": "bdev_nvme_attach_controller" 00:29:37.925 }' 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:37.925 22:17:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:38.491 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:38.491 ... 
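The trace above amounts to three steps: create a DIF type-3 null bdev, expose it through an NVMe/TCP subsystem on 10.0.0.2:4420, and hand fio's spdk_bdev ioengine a JSON config describing the controller to attach (the printf output shown). A minimal manual sketch of the same sequence follows, assuming a running nvmf_tgt with the TCP transport already created earlier in the run; the rpc.py and fio-plugin paths, the temporary JSON filename, and the exact fio job options are illustrative rather than taken verbatim from the harness.

# Sketch of the sequence traced above (not the harness's exact code).
# Assumes: an SPDK tree at ./, a running nvmf_tgt, and a TCP transport
# already created earlier in the run (nvmf_create_transport -t tcp ...).
RPC=./scripts/rpc.py

# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Subsystem, namespace, and NVMe/TCP listener on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# JSON consumed by the fio bdev plugin; the "params" object matches the
# printf output above, wrapped in the standard SPDK subsystem-config layout.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Run fio through the SPDK bdev plugin. Nvme0n1 is the bdev created by the
# attach call above; the job options mirror the bs=128k / numjobs=3 /
# iodepth=3 / runtime=5 parameters set at the top of this test.
LD_PRELOAD=./build/fio/spdk_bdev fio --name=filename0 --thread=1 \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 \
    --rw=randread --bs=128k --numjobs=3 --iodepth=3 --runtime=5 --time_based

The second invocation later in this section repeats the same pattern with --dif-type 2 and three subsystems (cnode0 through cnode2), one fio filename per attached namespace.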
00:29:38.491 fio-3.35 00:29:38.491 Starting 3 threads 00:29:38.491 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.768 00:29:43.768 filename0: (groupid=0, jobs=1): err= 0: pid=2872974: Wed Jul 24 22:17:22 2024 00:29:43.768 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(165MiB/5006msec) 00:29:43.768 slat (nsec): min=5710, max=44980, avg=9955.65, stdev=4475.49 00:29:43.768 clat (usec): min=3889, max=92906, avg=11337.00, stdev=12790.83 00:29:43.768 lat (usec): min=3896, max=92917, avg=11346.96, stdev=12790.89 00:29:43.768 clat percentiles (usec): 00:29:43.769 | 1.00th=[ 4359], 5.00th=[ 4752], 10.00th=[ 5080], 20.00th=[ 5735], 00:29:43.769 | 30.00th=[ 6456], 40.00th=[ 6915], 50.00th=[ 7308], 60.00th=[ 7767], 00:29:43.769 | 70.00th=[ 8586], 80.00th=[ 9503], 90.00th=[11731], 95.00th=[49021], 00:29:43.769 | 99.00th=[51119], 99.50th=[51643], 99.90th=[89654], 99.95th=[92799], 00:29:43.769 | 99.99th=[92799] 00:29:43.769 bw ( KiB/s): min=21504, max=49664, per=33.46%, avg=33783.80, stdev=8448.77, samples=10 00:29:43.769 iops : min= 168, max= 388, avg=263.90, stdev=65.97, samples=10 00:29:43.769 lat (msec) : 4=0.08%, 10=85.19%, 20=5.14%, 50=6.65%, 100=2.95% 00:29:43.769 cpu : usr=92.13%, sys=7.49%, ctx=8, majf=0, minf=82 00:29:43.769 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.769 issued rwts: total=1323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.769 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:43.769 filename0: (groupid=0, jobs=1): err= 0: pid=2872975: Wed Jul 24 22:17:22 2024 00:29:43.769 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(175MiB/5003msec) 00:29:43.769 slat (usec): min=5, max=122, avg= 9.22, stdev= 4.76 00:29:43.769 clat (usec): min=3980, max=52593, avg=10728.40, stdev=11675.80 00:29:43.769 lat (usec): min=3987, max=52601, avg=10737.62, stdev=11676.04 00:29:43.769 clat percentiles (usec): 00:29:43.769 | 1.00th=[ 4359], 5.00th=[ 4948], 10.00th=[ 5211], 20.00th=[ 5669], 00:29:43.769 | 30.00th=[ 6390], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7767], 00:29:43.769 | 70.00th=[ 8586], 80.00th=[ 9503], 90.00th=[11076], 95.00th=[49021], 00:29:43.769 | 99.00th=[51119], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:29:43.769 | 99.99th=[52691] 00:29:43.769 bw ( KiB/s): min=24576, max=51456, per=35.92%, avg=36266.67, stdev=8833.85, samples=9 00:29:43.769 iops : min= 192, max= 402, avg=283.33, stdev=69.01, samples=9 00:29:43.769 lat (msec) : 4=0.07%, 10=85.04%, 20=6.73%, 50=5.23%, 100=2.93% 00:29:43.769 cpu : usr=91.82%, sys=7.82%, ctx=9, majf=0, minf=71 00:29:43.769 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.769 issued rwts: total=1397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.769 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:43.769 filename0: (groupid=0, jobs=1): err= 0: pid=2872976: Wed Jul 24 22:17:22 2024 00:29:43.769 read: IOPS=249, BW=31.1MiB/s (32.6MB/s)(157MiB/5040msec) 00:29:43.769 slat (nsec): min=5721, max=27526, avg=9541.25, stdev=3608.23 00:29:43.769 clat (usec): min=4033, max=92773, avg=12031.19, stdev=13505.51 00:29:43.769 lat (usec): min=4040, max=92782, avg=12040.74, stdev=13505.78 00:29:43.769 clat percentiles (usec): 
00:29:43.769 | 1.00th=[ 4228], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 6128], 00:29:43.769 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7439], 60.00th=[ 8160], 00:29:43.769 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[47449], 95.00th=[50070], 00:29:43.769 | 99.00th=[51119], 99.50th=[52167], 99.90th=[88605], 99.95th=[92799], 00:29:43.769 | 99.99th=[92799] 00:29:43.769 bw ( KiB/s): min=23296, max=42496, per=31.75%, avg=32056.50, stdev=8196.12, samples=10 00:29:43.769 iops : min= 182, max= 332, avg=250.40, stdev=64.06, samples=10 00:29:43.769 lat (msec) : 10=82.63%, 20=6.53%, 50=6.45%, 100=4.38% 00:29:43.769 cpu : usr=92.66%, sys=6.99%, ctx=7, majf=0, minf=113 00:29:43.769 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.769 issued rwts: total=1255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.769 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:43.769 00:29:43.769 Run status group 0 (all jobs): 00:29:43.769 READ: bw=98.6MiB/s (103MB/s), 31.1MiB/s-34.9MiB/s (32.6MB/s-36.6MB/s), io=497MiB (521MB), run=5003-5040msec 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:44.029 
22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.029 bdev_null0 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.029 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.030 [2024-07-24 22:17:23.142360] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.030 bdev_null1 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.030 bdev_null2 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:44.030 22:17:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.030 { 00:29:44.030 "params": { 00:29:44.030 "name": "Nvme$subsystem", 00:29:44.030 "trtype": "$TEST_TRANSPORT", 00:29:44.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.030 "adrfam": "ipv4", 00:29:44.030 "trsvcid": "$NVMF_PORT", 00:29:44.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.030 "hdgst": ${hdgst:-false}, 00:29:44.030 "ddgst": ${ddgst:-false} 00:29:44.030 }, 00:29:44.030 "method": "bdev_nvme_attach_controller" 00:29:44.030 } 00:29:44.030 EOF 00:29:44.030 )") 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.030 { 00:29:44.030 "params": { 00:29:44.030 "name": "Nvme$subsystem", 00:29:44.030 "trtype": "$TEST_TRANSPORT", 00:29:44.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.030 "adrfam": "ipv4", 00:29:44.030 "trsvcid": "$NVMF_PORT", 00:29:44.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.030 "hdgst": ${hdgst:-false}, 00:29:44.030 "ddgst": ${ddgst:-false} 00:29:44.030 }, 00:29:44.030 "method": "bdev_nvme_attach_controller" 00:29:44.030 } 00:29:44.030 EOF 00:29:44.030 )") 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.030 { 00:29:44.030 "params": { 00:29:44.030 "name": "Nvme$subsystem", 00:29:44.030 "trtype": "$TEST_TRANSPORT", 00:29:44.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.030 "adrfam": "ipv4", 00:29:44.030 "trsvcid": "$NVMF_PORT", 00:29:44.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.030 "hdgst": ${hdgst:-false}, 00:29:44.030 "ddgst": ${ddgst:-false} 00:29:44.030 }, 00:29:44.030 "method": "bdev_nvme_attach_controller" 00:29:44.030 } 00:29:44.030 EOF 00:29:44.030 )") 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:44.030 22:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:44.030 "params": { 00:29:44.030 "name": "Nvme0", 00:29:44.030 "trtype": "tcp", 00:29:44.030 "traddr": "10.0.0.2", 00:29:44.030 "adrfam": "ipv4", 00:29:44.030 "trsvcid": "4420", 00:29:44.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:44.030 "hdgst": false, 00:29:44.030 "ddgst": false 00:29:44.030 }, 00:29:44.030 "method": "bdev_nvme_attach_controller" 00:29:44.030 },{ 00:29:44.030 "params": { 00:29:44.030 "name": "Nvme1", 00:29:44.030 "trtype": "tcp", 00:29:44.030 "traddr": "10.0.0.2", 00:29:44.030 "adrfam": "ipv4", 00:29:44.031 "trsvcid": "4420", 00:29:44.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:44.031 "hdgst": false, 00:29:44.031 "ddgst": false 00:29:44.031 }, 00:29:44.031 "method": "bdev_nvme_attach_controller" 00:29:44.031 },{ 00:29:44.031 "params": { 00:29:44.031 "name": "Nvme2", 00:29:44.031 "trtype": "tcp", 00:29:44.031 "traddr": "10.0.0.2", 00:29:44.031 "adrfam": "ipv4", 00:29:44.031 "trsvcid": "4420", 00:29:44.031 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:44.031 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:44.031 "hdgst": false, 00:29:44.031 "ddgst": false 00:29:44.031 }, 00:29:44.031 "method": "bdev_nvme_attach_controller" 00:29:44.031 }' 00:29:44.312 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:44.312 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:44.312 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.312 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:44.313 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:44.313 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:44.313 22:17:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:29:44.313 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:44.313 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:44.313 22:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.574 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:44.574 ... 00:29:44.574 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:44.574 ... 00:29:44.574 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:44.574 ... 00:29:44.574 fio-3.35 00:29:44.574 Starting 24 threads 00:29:44.574 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.764 00:29:56.765 filename0: (groupid=0, jobs=1): err= 0: pid=2874188: Wed Jul 24 22:17:34 2024 00:29:56.765 read: IOPS=656, BW=2627KiB/s (2690kB/s)(25.7MiB/10019msec) 00:29:56.765 slat (nsec): min=3009, max=62893, avg=11231.58, stdev=5124.26 00:29:56.765 clat (usec): min=3658, max=51171, avg=24284.43, stdev=6304.50 00:29:56.765 lat (usec): min=3666, max=51178, avg=24295.66, stdev=6305.10 00:29:56.765 clat percentiles (usec): 00:29:56.765 | 1.00th=[ 5407], 5.00th=[ 8979], 10.00th=[15401], 20.00th=[22414], 00:29:56.765 | 30.00th=[24249], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:29:56.765 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27657], 95.00th=[30016], 00:29:56.765 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49546], 99.95th=[50070], 00:29:56.765 | 99.99th=[51119] 00:29:56.765 bw ( KiB/s): min= 2336, max= 3088, per=4.52%, avg=2625.60, stdev=215.61, samples=20 00:29:56.765 iops : min= 584, max= 772, avg=656.40, stdev=53.90, samples=20 00:29:56.765 lat (msec) : 4=0.24%, 10=5.49%, 20=9.36%, 50=84.89%, 100=0.02% 00:29:56.765 cpu : usr=96.86%, sys=2.78%, ctx=21, majf=0, minf=62 00:29:56.765 IO depths : 1=2.7%, 2=5.4%, 4=14.8%, 8=66.7%, 16=10.4%, 32=0.0%, >=64=0.0% 00:29:56.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 issued rwts: total=6580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.765 filename0: (groupid=0, jobs=1): err= 0: pid=2874189: Wed Jul 24 22:17:34 2024 00:29:56.765 read: IOPS=629, BW=2517KiB/s (2577kB/s)(24.6MiB/10017msec) 00:29:56.765 slat (nsec): min=3929, max=53091, avg=13635.99, stdev=6069.27 00:29:56.765 clat (usec): min=3743, max=49028, avg=25321.27, stdev=4221.51 00:29:56.765 lat (usec): min=3750, max=49059, avg=25334.91, stdev=4222.67 00:29:56.765 clat percentiles (usec): 00:29:56.765 | 1.00th=[ 7373], 5.00th=[14877], 10.00th=[22152], 20.00th=[25560], 00:29:56.765 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:29:56.765 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27919], 00:29:56.765 | 99.00th=[31851], 99.50th=[35914], 99.90th=[47973], 99.95th=[49021], 00:29:56.765 | 99.99th=[49021] 00:29:56.765 bw ( KiB/s): min= 2320, max= 2920, per=4.33%, avg=2514.80, stdev=152.86, samples=20 00:29:56.765 iops : min= 580, max= 730, avg=628.70, stdev=38.21, samples=20 00:29:56.765 lat (msec) : 
4=0.22%, 10=1.52%, 20=6.76%, 50=91.50% 00:29:56.765 cpu : usr=96.54%, sys=3.10%, ctx=41, majf=0, minf=68 00:29:56.765 IO depths : 1=4.5%, 2=9.3%, 4=20.4%, 8=57.5%, 16=8.3%, 32=0.0%, >=64=0.0% 00:29:56.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 complete : 0=0.0%, 4=93.0%, 8=1.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 issued rwts: total=6303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.765 filename0: (groupid=0, jobs=1): err= 0: pid=2874191: Wed Jul 24 22:17:34 2024 00:29:56.765 read: IOPS=617, BW=2469KiB/s (2528kB/s)(24.1MiB/10008msec) 00:29:56.765 slat (nsec): min=6404, max=57544, avg=13156.65, stdev=5846.43 00:29:56.765 clat (usec): min=5425, max=47302, avg=25812.72, stdev=3309.68 00:29:56.765 lat (usec): min=5432, max=47311, avg=25825.88, stdev=3310.20 00:29:56.765 clat percentiles (usec): 00:29:56.765 | 1.00th=[ 8160], 5.00th=[19792], 10.00th=[25035], 20.00th=[25822], 00:29:56.765 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:29:56.765 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27657], 00:29:56.765 | 99.00th=[32375], 99.50th=[32637], 99.90th=[41157], 99.95th=[47449], 00:29:56.765 | 99.99th=[47449] 00:29:56.765 bw ( KiB/s): min= 2304, max= 2848, per=4.24%, avg=2465.05, stdev=142.58, samples=20 00:29:56.765 iops : min= 576, max= 712, avg=616.25, stdev=35.64, samples=20 00:29:56.765 lat (msec) : 10=1.21%, 20=4.19%, 50=94.59% 00:29:56.765 cpu : usr=96.77%, sys=2.87%, ctx=18, majf=0, minf=64 00:29:56.765 IO depths : 1=5.1%, 2=10.3%, 4=21.9%, 8=55.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:29:56.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 issued rwts: total=6178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.765 filename0: (groupid=0, jobs=1): err= 0: pid=2874192: Wed Jul 24 22:17:34 2024 00:29:56.765 read: IOPS=603, BW=2415KiB/s (2473kB/s)(23.6MiB/10009msec) 00:29:56.765 slat (nsec): min=6491, max=64917, avg=19313.20, stdev=8361.97 00:29:56.765 clat (usec): min=7393, max=35606, avg=26347.47, stdev=1685.53 00:29:56.765 lat (usec): min=7407, max=35619, avg=26366.78, stdev=1685.31 00:29:56.765 clat percentiles (usec): 00:29:56.765 | 1.00th=[21103], 5.00th=[25035], 10.00th=[25560], 20.00th=[26084], 00:29:56.765 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.765 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27657], 00:29:56.765 | 99.00th=[31065], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:29:56.765 | 99.99th=[35390] 00:29:56.765 bw ( KiB/s): min= 2304, max= 2536, per=4.15%, avg=2411.20, stdev=55.15, samples=20 00:29:56.765 iops : min= 576, max= 634, avg=602.80, stdev=13.79, samples=20 00:29:56.765 lat (msec) : 10=0.20%, 20=0.63%, 50=99.17% 00:29:56.765 cpu : usr=96.61%, sys=3.01%, ctx=34, majf=0, minf=64 00:29:56.765 IO depths : 1=4.7%, 2=9.6%, 4=21.2%, 8=56.4%, 16=8.1%, 32=0.0%, >=64=0.0% 00:29:56.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 complete : 0=0.0%, 4=93.3%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 issued rwts: total=6044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.765 filename0: (groupid=0, jobs=1): err= 0: pid=2874193: Wed Jul 24 
22:17:34 2024 00:29:56.765 read: IOPS=597, BW=2388KiB/s (2446kB/s)(23.3MiB/10009msec) 00:29:56.765 slat (nsec): min=6279, max=72542, avg=16514.90, stdev=10555.43 00:29:56.765 clat (usec): min=4634, max=54052, avg=26698.98, stdev=4037.67 00:29:56.765 lat (usec): min=4646, max=54069, avg=26715.49, stdev=4037.69 00:29:56.765 clat percentiles (usec): 00:29:56.765 | 1.00th=[ 9372], 5.00th=[23725], 10.00th=[25560], 20.00th=[25822], 00:29:56.765 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.765 | 70.00th=[26870], 80.00th=[27132], 90.00th=[27657], 95.00th=[31851], 00:29:56.765 | 99.00th=[44827], 99.50th=[48497], 99.90th=[50070], 99.95th=[53740], 00:29:56.765 | 99.99th=[54264] 00:29:56.765 bw ( KiB/s): min= 2176, max= 2432, per=4.10%, avg=2381.21, stdev=63.09, samples=19 00:29:56.765 iops : min= 544, max= 608, avg=595.26, stdev=15.77, samples=19 00:29:56.765 lat (msec) : 10=1.10%, 20=1.84%, 50=96.97%, 100=0.08% 00:29:56.765 cpu : usr=96.70%, sys=2.92%, ctx=15, majf=0, minf=95 00:29:56.765 IO depths : 1=1.2%, 2=2.7%, 4=9.8%, 8=72.3%, 16=14.0%, 32=0.0%, >=64=0.0% 00:29:56.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 complete : 0=0.0%, 4=91.2%, 8=5.7%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 issued rwts: total=5976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.765 filename0: (groupid=0, jobs=1): err= 0: pid=2874194: Wed Jul 24 22:17:34 2024 00:29:56.765 read: IOPS=600, BW=2404KiB/s (2461kB/s)(23.5MiB/10011msec) 00:29:56.765 slat (nsec): min=6457, max=76011, avg=25924.31, stdev=10516.61 00:29:56.765 clat (usec): min=14789, max=38824, avg=26416.90, stdev=1446.92 00:29:56.765 lat (usec): min=14796, max=38840, avg=26442.82, stdev=1447.40 00:29:56.765 clat percentiles (usec): 00:29:56.765 | 1.00th=[20055], 5.00th=[25297], 10.00th=[25822], 20.00th=[26084], 00:29:56.765 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.765 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27395], 00:29:56.765 | 99.00th=[31589], 99.50th=[33817], 99.90th=[36439], 99.95th=[38536], 00:29:56.765 | 99.99th=[39060] 00:29:56.765 bw ( KiB/s): min= 2304, max= 2432, per=4.13%, avg=2400.20, stdev=56.52, samples=20 00:29:56.765 iops : min= 576, max= 608, avg=600.05, stdev=14.13, samples=20 00:29:56.765 lat (msec) : 20=0.93%, 50=99.07% 00:29:56.765 cpu : usr=96.66%, sys=2.94%, ctx=17, majf=0, minf=55 00:29:56.765 IO depths : 1=4.9%, 2=10.2%, 4=22.5%, 8=54.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:29:56.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.765 filename0: (groupid=0, jobs=1): err= 0: pid=2874195: Wed Jul 24 22:17:34 2024 00:29:56.765 read: IOPS=601, BW=2406KiB/s (2463kB/s)(23.5MiB/10003msec) 00:29:56.765 slat (nsec): min=6913, max=76622, avg=27857.47, stdev=10418.65 00:29:56.765 clat (usec): min=17570, max=28592, avg=26364.41, stdev=716.87 00:29:56.765 lat (usec): min=17580, max=28625, avg=26392.26, stdev=717.06 00:29:56.765 clat percentiles (usec): 00:29:56.765 | 1.00th=[25297], 5.00th=[25560], 10.00th=[25822], 20.00th=[25822], 00:29:56.765 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.765 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 
95.00th=[27395], 00:29:56.765 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28443], 99.95th=[28443], 00:29:56.765 | 99.99th=[28705] 00:29:56.765 bw ( KiB/s): min= 2304, max= 2432, per=4.14%, avg=2405.05, stdev=53.61, samples=19 00:29:56.765 iops : min= 576, max= 608, avg=601.26, stdev=13.40, samples=19 00:29:56.765 lat (msec) : 20=0.27%, 50=99.73% 00:29:56.765 cpu : usr=97.18%, sys=2.46%, ctx=15, majf=0, minf=50 00:29:56.765 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:56.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.765 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.765 filename0: (groupid=0, jobs=1): err= 0: pid=2874196: Wed Jul 24 22:17:34 2024 00:29:56.765 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.0MiB/10002msec) 00:29:56.765 slat (nsec): min=4763, max=76004, avg=20181.91, stdev=12437.56 00:29:56.766 clat (usec): min=4972, max=49217, avg=27088.52, stdev=4804.43 00:29:56.766 lat (usec): min=4979, max=49231, avg=27108.70, stdev=4803.94 00:29:56.766 clat percentiles (usec): 00:29:56.766 | 1.00th=[ 9110], 5.00th=[23462], 10.00th=[25560], 20.00th=[25822], 00:29:56.766 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.766 | 70.00th=[26870], 80.00th=[27395], 90.00th=[30540], 95.00th=[36439], 00:29:56.766 | 99.00th=[45876], 99.50th=[47449], 99.90th=[49021], 99.95th=[49021], 00:29:56.766 | 99.99th=[49021] 00:29:56.766 bw ( KiB/s): min= 2176, max= 2448, per=4.02%, avg=2336.21, stdev=79.65, samples=19 00:29:56.766 iops : min= 544, max= 612, avg=584.05, stdev=19.91, samples=19 00:29:56.766 lat (msec) : 10=1.22%, 20=2.33%, 50=96.44% 00:29:56.766 cpu : usr=97.13%, sys=2.48%, ctx=19, majf=0, minf=69 00:29:56.766 IO depths : 1=1.4%, 2=3.7%, 4=14.9%, 8=67.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 complete : 0=0.0%, 4=92.2%, 8=3.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 issued rwts: total=5878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.766 filename1: (groupid=0, jobs=1): err= 0: pid=2874197: Wed Jul 24 22:17:34 2024 00:29:56.766 read: IOPS=598, BW=2392KiB/s (2450kB/s)(23.4MiB/10003msec) 00:29:56.766 slat (nsec): min=4185, max=74355, avg=27443.26, stdev=12092.04 00:29:56.766 clat (usec): min=8307, max=45117, avg=26520.82, stdev=2578.90 00:29:56.766 lat (usec): min=8319, max=45129, avg=26548.27, stdev=2577.90 00:29:56.766 clat percentiles (usec): 00:29:56.766 | 1.00th=[18220], 5.00th=[25560], 10.00th=[25560], 20.00th=[25822], 00:29:56.766 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.766 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27919], 00:29:56.766 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[45351], 00:29:56.766 | 99.99th=[45351] 00:29:56.766 bw ( KiB/s): min= 2048, max= 2448, per=4.09%, avg=2377.68, stdev=98.36, samples=19 00:29:56.766 iops : min= 512, max= 612, avg=594.42, stdev=24.59, samples=19 00:29:56.766 lat (msec) : 10=0.32%, 20=1.22%, 50=98.46% 00:29:56.766 cpu : usr=96.67%, sys=2.97%, ctx=17, majf=0, minf=61 00:29:56.766 IO depths : 1=5.4%, 2=10.9%, 4=22.7%, 8=53.7%, 16=7.3%, 32=0.0%, >=64=0.0% 00:29:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:29:56.766 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 issued rwts: total=5983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.766 filename1: (groupid=0, jobs=1): err= 0: pid=2874198: Wed Jul 24 22:17:34 2024 00:29:56.766 read: IOPS=600, BW=2404KiB/s (2462kB/s)(23.5MiB/10010msec) 00:29:56.766 slat (nsec): min=6761, max=70256, avg=22771.14, stdev=8733.87 00:29:56.766 clat (usec): min=15721, max=42373, avg=26437.61, stdev=881.39 00:29:56.766 lat (usec): min=15729, max=42390, avg=26460.38, stdev=880.79 00:29:56.766 clat percentiles (usec): 00:29:56.766 | 1.00th=[25035], 5.00th=[25560], 10.00th=[25822], 20.00th=[26084], 00:29:56.766 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.766 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27395], 00:29:56.766 | 99.00th=[28181], 99.50th=[29230], 99.90th=[32637], 99.95th=[32900], 00:29:56.766 | 99.99th=[42206] 00:29:56.766 bw ( KiB/s): min= 2304, max= 2432, per=4.13%, avg=2400.00, stdev=56.87, samples=20 00:29:56.766 iops : min= 576, max= 608, avg=600.00, stdev=14.22, samples=20 00:29:56.766 lat (msec) : 20=0.27%, 50=99.73% 00:29:56.766 cpu : usr=97.06%, sys=2.58%, ctx=19, majf=0, minf=61 00:29:56.766 IO depths : 1=6.1%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.766 filename1: (groupid=0, jobs=1): err= 0: pid=2874199: Wed Jul 24 22:17:34 2024 00:29:56.766 read: IOPS=598, BW=2393KiB/s (2451kB/s)(23.4MiB/10004msec) 00:29:56.766 slat (nsec): min=5904, max=73747, avg=24947.92, stdev=11907.37 00:29:56.766 clat (usec): min=6938, max=52366, avg=26545.41, stdev=2844.60 00:29:56.766 lat (usec): min=6946, max=52382, avg=26570.36, stdev=2844.30 00:29:56.766 clat percentiles (usec): 00:29:56.766 | 1.00th=[17957], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:29:56.766 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.766 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27395], 95.00th=[28181], 00:29:56.766 | 99.00th=[40633], 99.50th=[45876], 99.90th=[50070], 99.95th=[52167], 00:29:56.766 | 99.99th=[52167] 00:29:56.766 bw ( KiB/s): min= 2256, max= 2560, per=4.11%, avg=2389.63, stdev=71.09, samples=19 00:29:56.766 iops : min= 564, max= 640, avg=597.37, stdev=17.83, samples=19 00:29:56.766 lat (msec) : 10=0.30%, 20=0.72%, 50=98.86%, 100=0.12% 00:29:56.766 cpu : usr=96.80%, sys=2.82%, ctx=19, majf=0, minf=39 00:29:56.766 IO depths : 1=4.2%, 2=8.8%, 4=20.8%, 8=57.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:29:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 issued rwts: total=5986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.766 filename1: (groupid=0, jobs=1): err= 0: pid=2874200: Wed Jul 24 22:17:34 2024 00:29:56.766 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10002msec) 00:29:56.766 slat (nsec): min=6411, max=72007, avg=21074.27, stdev=12779.45 00:29:56.766 clat (usec): min=3748, max=48363, avg=26817.88, stdev=4070.19 00:29:56.766 lat (usec): 
min=3762, max=48378, avg=26838.96, stdev=4070.13 00:29:56.766 clat percentiles (usec): 00:29:56.766 | 1.00th=[12256], 5.00th=[23462], 10.00th=[25560], 20.00th=[25822], 00:29:56.766 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.766 | 70.00th=[26870], 80.00th=[27132], 90.00th=[28443], 95.00th=[34866], 00:29:56.766 | 99.00th=[42730], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:29:56.766 | 99.99th=[48497] 00:29:56.766 bw ( KiB/s): min= 2208, max= 2448, per=4.06%, avg=2356.84, stdev=75.62, samples=19 00:29:56.766 iops : min= 552, max= 612, avg=589.21, stdev=18.90, samples=19 00:29:56.766 lat (msec) : 4=0.02%, 10=0.47%, 20=2.73%, 50=96.78% 00:29:56.766 cpu : usr=96.63%, sys=2.97%, ctx=19, majf=0, minf=66 00:29:56.766 IO depths : 1=2.0%, 2=5.5%, 4=17.0%, 8=64.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:29:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 complete : 0=0.0%, 4=92.5%, 8=2.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 issued rwts: total=5933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.766 filename1: (groupid=0, jobs=1): err= 0: pid=2874201: Wed Jul 24 22:17:34 2024 00:29:56.766 read: IOPS=600, BW=2404KiB/s (2461kB/s)(23.5MiB/10011msec) 00:29:56.766 slat (nsec): min=7062, max=74707, avg=26377.64, stdev=9794.18 00:29:56.766 clat (usec): min=17712, max=35931, avg=26404.50, stdev=791.50 00:29:56.766 lat (usec): min=17737, max=35947, avg=26430.88, stdev=791.12 00:29:56.766 clat percentiles (usec): 00:29:56.766 | 1.00th=[25297], 5.00th=[25560], 10.00th=[25822], 20.00th=[26084], 00:29:56.766 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.766 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27395], 00:29:56.766 | 99.00th=[27919], 99.50th=[28181], 99.90th=[32900], 99.95th=[32900], 00:29:56.766 | 99.99th=[35914] 00:29:56.766 bw ( KiB/s): min= 2304, max= 2432, per=4.13%, avg=2400.00, stdev=56.87, samples=20 00:29:56.766 iops : min= 576, max= 608, avg=600.00, stdev=14.22, samples=20 00:29:56.766 lat (msec) : 20=0.27%, 50=99.73% 00:29:56.766 cpu : usr=97.04%, sys=2.60%, ctx=21, majf=0, minf=44 00:29:56.766 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.766 filename1: (groupid=0, jobs=1): err= 0: pid=2874202: Wed Jul 24 22:17:34 2024 00:29:56.766 read: IOPS=600, BW=2402KiB/s (2459kB/s)(23.5MiB/10002msec) 00:29:56.766 slat (nsec): min=6634, max=74916, avg=27147.29, stdev=13215.49 00:29:56.766 clat (usec): min=6809, max=47412, avg=26422.91, stdev=3409.33 00:29:56.766 lat (usec): min=6818, max=47432, avg=26450.06, stdev=3409.06 00:29:56.766 clat percentiles (usec): 00:29:56.766 | 1.00th=[12125], 5.00th=[25035], 10.00th=[25560], 20.00th=[25822], 00:29:56.766 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:29:56.766 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27919], 00:29:56.766 | 99.00th=[43779], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:29:56.766 | 99.99th=[47449] 00:29:56.766 bw ( KiB/s): min= 2256, max= 2504, per=4.11%, avg=2387.16, stdev=69.70, samples=19 00:29:56.766 iops : min= 564, max= 626, 
avg=596.79, stdev=17.42, samples=19 00:29:56.766 lat (msec) : 10=0.88%, 20=1.78%, 50=97.34% 00:29:56.766 cpu : usr=96.88%, sys=2.71%, ctx=16, majf=0, minf=52 00:29:56.766 IO depths : 1=4.8%, 2=9.6%, 4=20.7%, 8=56.5%, 16=8.4%, 32=0.0%, >=64=0.0% 00:29:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 complete : 0=0.0%, 4=93.1%, 8=1.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.766 issued rwts: total=6005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.767 filename1: (groupid=0, jobs=1): err= 0: pid=2874203: Wed Jul 24 22:17:34 2024 00:29:56.767 read: IOPS=629, BW=2520KiB/s (2580kB/s)(24.6MiB/10011msec) 00:29:56.767 slat (nsec): min=6374, max=55573, avg=13667.96, stdev=6726.77 00:29:56.767 clat (usec): min=4503, max=49519, avg=25294.72, stdev=4652.74 00:29:56.767 lat (usec): min=4516, max=49535, avg=25308.39, stdev=4653.73 00:29:56.767 clat percentiles (usec): 00:29:56.767 | 1.00th=[ 6521], 5.00th=[14877], 10.00th=[21365], 20.00th=[25560], 00:29:56.767 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:29:56.767 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[28443], 00:29:56.767 | 99.00th=[36439], 99.50th=[42730], 99.90th=[49021], 99.95th=[49546], 00:29:56.767 | 99.99th=[49546] 00:29:56.767 bw ( KiB/s): min= 2304, max= 2704, per=4.33%, avg=2516.00, stdev=100.81, samples=20 00:29:56.767 iops : min= 576, max= 676, avg=629.00, stdev=25.20, samples=20 00:29:56.767 lat (msec) : 10=2.52%, 20=6.77%, 50=90.71% 00:29:56.767 cpu : usr=97.03%, sys=2.59%, ctx=20, majf=0, minf=58 00:29:56.767 IO depths : 1=3.9%, 2=8.0%, 4=19.5%, 8=59.8%, 16=8.8%, 32=0.0%, >=64=0.0% 00:29:56.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 issued rwts: total=6306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.767 filename1: (groupid=0, jobs=1): err= 0: pid=2874204: Wed Jul 24 22:17:34 2024 00:29:56.767 read: IOPS=604, BW=2419KiB/s (2477kB/s)(23.6MiB/10009msec) 00:29:56.767 slat (nsec): min=6395, max=69232, avg=17393.46, stdev=7136.92 00:29:56.767 clat (usec): min=6209, max=44132, avg=26316.66, stdev=2354.34 00:29:56.767 lat (usec): min=6223, max=44146, avg=26334.05, stdev=2354.86 00:29:56.767 clat percentiles (usec): 00:29:56.767 | 1.00th=[17957], 5.00th=[23987], 10.00th=[25560], 20.00th=[25822], 00:29:56.767 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.767 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27395], 95.00th=[28705], 00:29:56.767 | 99.00th=[32375], 99.50th=[34866], 99.90th=[43779], 99.95th=[44303], 00:29:56.767 | 99.99th=[44303] 00:29:56.767 bw ( KiB/s): min= 2304, max= 2528, per=4.16%, avg=2415.40, stdev=64.58, samples=20 00:29:56.767 iops : min= 576, max= 632, avg=603.85, stdev=16.14, samples=20 00:29:56.767 lat (msec) : 10=0.43%, 20=1.12%, 50=98.45% 00:29:56.767 cpu : usr=96.56%, sys=3.05%, ctx=24, majf=0, minf=54 00:29:56.767 IO depths : 1=4.6%, 2=9.5%, 4=21.9%, 8=56.0%, 16=8.0%, 32=0.0%, >=64=0.0% 00:29:56.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 issued rwts: total=6054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.767 filename2: 
(groupid=0, jobs=1): err= 0: pid=2874206: Wed Jul 24 22:17:34 2024 00:29:56.767 read: IOPS=600, BW=2402KiB/s (2459kB/s)(23.5MiB/10002msec) 00:29:56.767 slat (nsec): min=6388, max=91820, avg=23525.95, stdev=12606.38 00:29:56.767 clat (usec): min=5892, max=48373, avg=26452.77, stdev=3872.67 00:29:56.767 lat (usec): min=5899, max=48394, avg=26476.30, stdev=3873.01 00:29:56.767 clat percentiles (usec): 00:29:56.767 | 1.00th=[ 9241], 5.00th=[24511], 10.00th=[25560], 20.00th=[25822], 00:29:56.767 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.767 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27395], 95.00th=[28443], 00:29:56.767 | 99.00th=[43779], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:29:56.767 | 99.99th=[48497] 00:29:56.767 bw ( KiB/s): min= 2228, max= 2456, per=4.11%, avg=2387.16, stdev=65.38, samples=19 00:29:56.767 iops : min= 557, max= 614, avg=596.79, stdev=16.35, samples=19 00:29:56.767 lat (msec) : 10=1.93%, 20=0.90%, 50=97.17% 00:29:56.767 cpu : usr=96.98%, sys=2.63%, ctx=20, majf=0, minf=49 00:29:56.767 IO depths : 1=4.3%, 2=8.5%, 4=19.6%, 8=58.4%, 16=9.2%, 32=0.0%, >=64=0.0% 00:29:56.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 issued rwts: total=6005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.767 filename2: (groupid=0, jobs=1): err= 0: pid=2874207: Wed Jul 24 22:17:34 2024 00:29:56.767 read: IOPS=599, BW=2399KiB/s (2456kB/s)(23.4MiB/10005msec) 00:29:56.767 slat (nsec): min=6128, max=76398, avg=28924.76, stdev=11491.18 00:29:56.767 clat (usec): min=9117, max=47979, avg=26429.40, stdev=2067.94 00:29:56.767 lat (usec): min=9131, max=47996, avg=26458.33, stdev=2067.07 00:29:56.767 clat percentiles (usec): 00:29:56.767 | 1.00th=[21627], 5.00th=[25560], 10.00th=[25822], 20.00th=[25822], 00:29:56.767 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.767 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27395], 00:29:56.767 | 99.00th=[33424], 99.50th=[41681], 99.90th=[46400], 99.95th=[47973], 00:29:56.767 | 99.99th=[47973] 00:29:56.767 bw ( KiB/s): min= 2176, max= 2432, per=4.10%, avg=2384.84, stdev=77.19, samples=19 00:29:56.767 iops : min= 544, max= 608, avg=596.21, stdev=19.30, samples=19 00:29:56.767 lat (msec) : 10=0.27%, 20=0.55%, 50=99.18% 00:29:56.767 cpu : usr=96.93%, sys=2.72%, ctx=15, majf=0, minf=47 00:29:56.767 IO depths : 1=5.9%, 2=11.8%, 4=24.1%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:29:56.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.767 filename2: (groupid=0, jobs=1): err= 0: pid=2874208: Wed Jul 24 22:17:34 2024 00:29:56.767 read: IOPS=588, BW=2356KiB/s (2412kB/s)(23.0MiB/10002msec) 00:29:56.767 slat (nsec): min=6384, max=76977, avg=19019.13, stdev=12450.88 00:29:56.767 clat (usec): min=4946, max=50732, avg=27064.24, stdev=4753.59 00:29:56.767 lat (usec): min=4959, max=50754, avg=27083.25, stdev=4753.26 00:29:56.767 clat percentiles (usec): 00:29:56.767 | 1.00th=[ 9241], 5.00th=[24249], 10.00th=[25560], 20.00th=[25822], 00:29:56.767 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26608], 00:29:56.767 | 
70.00th=[26870], 80.00th=[27132], 90.00th=[29492], 95.00th=[35914], 00:29:56.767 | 99.00th=[45351], 99.50th=[46400], 99.90th=[50594], 99.95th=[50594], 00:29:56.767 | 99.99th=[50594] 00:29:56.767 bw ( KiB/s): min= 2176, max= 2464, per=4.02%, avg=2338.74, stdev=72.44, samples=19 00:29:56.767 iops : min= 544, max= 616, avg=584.68, stdev=18.11, samples=19 00:29:56.767 lat (msec) : 10=1.34%, 20=1.87%, 50=96.67%, 100=0.12% 00:29:56.767 cpu : usr=96.96%, sys=2.65%, ctx=18, majf=0, minf=64 00:29:56.767 IO depths : 1=0.4%, 2=1.0%, 4=10.0%, 8=74.6%, 16=14.0%, 32=0.0%, >=64=0.0% 00:29:56.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 complete : 0=0.0%, 4=91.4%, 8=4.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 issued rwts: total=5890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.767 filename2: (groupid=0, jobs=1): err= 0: pid=2874209: Wed Jul 24 22:17:34 2024 00:29:56.767 read: IOPS=620, BW=2482KiB/s (2541kB/s)(24.3MiB/10009msec) 00:29:56.767 slat (nsec): min=6401, max=54854, avg=15307.28, stdev=7259.43 00:29:56.767 clat (usec): min=5440, max=47136, avg=25670.34, stdev=3271.05 00:29:56.767 lat (usec): min=5448, max=47150, avg=25685.65, stdev=3272.49 00:29:56.767 clat percentiles (usec): 00:29:56.767 | 1.00th=[13173], 5.00th=[17171], 10.00th=[23200], 20.00th=[25822], 00:29:56.767 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.767 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27395], 00:29:56.767 | 99.00th=[30016], 99.50th=[35914], 99.90th=[45876], 99.95th=[46400], 00:29:56.767 | 99.99th=[46924] 00:29:56.767 bw ( KiB/s): min= 2292, max= 3232, per=4.26%, avg=2477.80, stdev=207.86, samples=20 00:29:56.767 iops : min= 573, max= 808, avg=619.45, stdev=51.97, samples=20 00:29:56.767 lat (msec) : 10=0.39%, 20=6.44%, 50=93.17% 00:29:56.767 cpu : usr=96.48%, sys=3.15%, ctx=44, majf=0, minf=66 00:29:56.767 IO depths : 1=4.6%, 2=9.5%, 4=20.5%, 8=57.0%, 16=8.4%, 32=0.0%, >=64=0.0% 00:29:56.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 complete : 0=0.0%, 4=93.1%, 8=1.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.767 issued rwts: total=6210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.767 filename2: (groupid=0, jobs=1): err= 0: pid=2874210: Wed Jul 24 22:17:34 2024 00:29:56.767 read: IOPS=600, BW=2404KiB/s (2461kB/s)(23.5MiB/10011msec) 00:29:56.767 slat (nsec): min=6953, max=98715, avg=30088.87, stdev=12345.10 00:29:56.767 clat (usec): min=9504, max=41729, avg=26347.94, stdev=994.20 00:29:56.768 lat (usec): min=9512, max=41758, avg=26378.03, stdev=994.64 00:29:56.768 clat percentiles (usec): 00:29:56.768 | 1.00th=[25035], 5.00th=[25560], 10.00th=[25560], 20.00th=[25822], 00:29:56.768 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:29:56.768 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27395], 00:29:56.768 | 99.00th=[27919], 99.50th=[28181], 99.90th=[32900], 99.95th=[41681], 00:29:56.768 | 99.99th=[41681] 00:29:56.768 bw ( KiB/s): min= 2304, max= 2432, per=4.13%, avg=2400.00, stdev=56.87, samples=20 00:29:56.768 iops : min= 576, max= 608, avg=600.00, stdev=14.22, samples=20 00:29:56.768 lat (msec) : 10=0.07%, 20=0.27%, 50=99.67% 00:29:56.768 cpu : usr=97.23%, sys=2.37%, ctx=63, majf=0, minf=48 00:29:56.768 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:56.768 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.768 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.768 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.768 filename2: (groupid=0, jobs=1): err= 0: pid=2874211: Wed Jul 24 22:17:34 2024 00:29:56.768 read: IOPS=600, BW=2402KiB/s (2460kB/s)(23.5MiB/10003msec) 00:29:56.768 slat (nsec): min=6115, max=73606, avg=21396.15, stdev=11056.12 00:29:56.768 clat (usec): min=5978, max=51467, avg=26482.68, stdev=3090.09 00:29:56.768 lat (usec): min=5985, max=51483, avg=26504.08, stdev=3090.56 00:29:56.768 clat percentiles (usec): 00:29:56.768 | 1.00th=[15139], 5.00th=[24773], 10.00th=[25560], 20.00th=[25822], 00:29:56.768 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.768 | 70.00th=[26870], 80.00th=[27132], 90.00th=[27395], 95.00th=[28443], 00:29:56.768 | 99.00th=[42730], 99.50th=[44827], 99.90th=[47449], 99.95th=[51119], 00:29:56.768 | 99.99th=[51643] 00:29:56.768 bw ( KiB/s): min= 2196, max= 2512, per=4.12%, avg=2394.89, stdev=77.63, samples=19 00:29:56.768 iops : min= 549, max= 628, avg=598.68, stdev=19.41, samples=19 00:29:56.768 lat (msec) : 10=0.47%, 20=1.56%, 50=97.89%, 100=0.08% 00:29:56.768 cpu : usr=96.84%, sys=2.77%, ctx=23, majf=0, minf=49 00:29:56.768 IO depths : 1=2.8%, 2=6.2%, 4=17.2%, 8=63.2%, 16=10.6%, 32=0.0%, >=64=0.0% 00:29:56.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.768 complete : 0=0.0%, 4=92.6%, 8=2.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.768 issued rwts: total=6008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.768 filename2: (groupid=0, jobs=1): err= 0: pid=2874212: Wed Jul 24 22:17:34 2024 00:29:56.768 read: IOPS=600, BW=2404KiB/s (2461kB/s)(23.5MiB/10011msec) 00:29:56.768 slat (nsec): min=6670, max=76505, avg=23482.87, stdev=9074.18 00:29:56.768 clat (usec): min=13230, max=39213, avg=26430.54, stdev=1031.11 00:29:56.768 lat (usec): min=13238, max=39274, avg=26454.02, stdev=1031.19 00:29:56.768 clat percentiles (usec): 00:29:56.768 | 1.00th=[25035], 5.00th=[25560], 10.00th=[25822], 20.00th=[26084], 00:29:56.768 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:29:56.768 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27132], 95.00th=[27395], 00:29:56.768 | 99.00th=[27919], 99.50th=[30802], 99.90th=[33817], 99.95th=[39060], 00:29:56.768 | 99.99th=[39060] 00:29:56.768 bw ( KiB/s): min= 2304, max= 2432, per=4.13%, avg=2400.00, stdev=56.87, samples=20 00:29:56.768 iops : min= 576, max= 608, avg=600.00, stdev=14.22, samples=20 00:29:56.768 lat (msec) : 20=0.37%, 50=99.63% 00:29:56.768 cpu : usr=96.55%, sys=3.08%, ctx=28, majf=0, minf=59 00:29:56.768 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:56.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.768 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.768 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.768 filename2: (groupid=0, jobs=1): err= 0: pid=2874213: Wed Jul 24 22:17:34 2024 00:29:56.768 read: IOPS=607, BW=2431KiB/s (2490kB/s)(23.8MiB/10011msec) 00:29:56.768 slat (nsec): min=6389, max=70863, avg=18471.65, stdev=8482.21 00:29:56.768 clat (usec): min=5416, 
max=47807, avg=26177.87, stdev=2945.79 00:29:56.768 lat (usec): min=5430, max=47854, avg=26196.34, stdev=2946.64 00:29:56.768 clat percentiles (usec): 00:29:56.768 | 1.00th=[ 9372], 5.00th=[23725], 10.00th=[25297], 20.00th=[25822], 00:29:56.768 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:29:56.768 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27395], 95.00th=[28181], 00:29:56.768 | 99.00th=[32637], 99.50th=[37487], 99.90th=[47449], 99.95th=[47973], 00:29:56.768 | 99.99th=[47973] 00:29:56.768 bw ( KiB/s): min= 2304, max= 2536, per=4.18%, avg=2429.20, stdev=62.90, samples=20 00:29:56.768 iops : min= 576, max= 634, avg=607.30, stdev=15.72, samples=20 00:29:56.768 lat (msec) : 10=1.20%, 20=1.35%, 50=97.45% 00:29:56.768 cpu : usr=96.79%, sys=2.80%, ctx=20, majf=0, minf=62 00:29:56.768 IO depths : 1=3.9%, 2=8.4%, 4=21.0%, 8=58.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:29:56.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.768 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.768 issued rwts: total=6085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:56.768 00:29:56.768 Run status group 0 (all jobs): 00:29:56.768 READ: bw=56.7MiB/s (59.5MB/s), 2351KiB/s-2627KiB/s (2407kB/s-2690kB/s), io=568MiB (596MB), run=10002-10019msec 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:56.768 22:17:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.768 bdev_null0 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:56.768 22:17:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.769 [2024-07-24 22:17:34.823438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.769 bdev_null1 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:56.769 { 00:29:56.769 "params": { 00:29:56.769 "name": "Nvme$subsystem", 00:29:56.769 "trtype": "$TEST_TRANSPORT", 00:29:56.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.769 "adrfam": "ipv4", 00:29:56.769 "trsvcid": "$NVMF_PORT", 00:29:56.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.769 "hdgst": ${hdgst:-false}, 00:29:56.769 "ddgst": ${ddgst:-false} 00:29:56.769 }, 00:29:56.769 "method": "bdev_nvme_attach_controller" 00:29:56.769 } 00:29:56.769 EOF 00:29:56.769 )") 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:56.769 { 00:29:56.769 "params": { 00:29:56.769 "name": "Nvme$subsystem", 00:29:56.769 "trtype": "$TEST_TRANSPORT", 00:29:56.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.769 "adrfam": "ipv4", 00:29:56.769 "trsvcid": "$NVMF_PORT", 00:29:56.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:29:56.769 "hdgst": ${hdgst:-false}, 00:29:56.769 "ddgst": ${ddgst:-false} 00:29:56.769 }, 00:29:56.769 "method": "bdev_nvme_attach_controller" 00:29:56.769 } 00:29:56.769 EOF 00:29:56.769 )") 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:56.769 "params": { 00:29:56.769 "name": "Nvme0", 00:29:56.769 "trtype": "tcp", 00:29:56.769 "traddr": "10.0.0.2", 00:29:56.769 "adrfam": "ipv4", 00:29:56.769 "trsvcid": "4420", 00:29:56.769 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.769 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:56.769 "hdgst": false, 00:29:56.769 "ddgst": false 00:29:56.769 }, 00:29:56.769 "method": "bdev_nvme_attach_controller" 00:29:56.769 },{ 00:29:56.769 "params": { 00:29:56.769 "name": "Nvme1", 00:29:56.769 "trtype": "tcp", 00:29:56.769 "traddr": "10.0.0.2", 00:29:56.769 "adrfam": "ipv4", 00:29:56.769 "trsvcid": "4420", 00:29:56.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.769 "hdgst": false, 00:29:56.769 "ddgst": false 00:29:56.769 }, 00:29:56.769 "method": "bdev_nvme_attach_controller" 00:29:56.769 }' 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:56.769 22:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.769 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:56.769 ... 00:29:56.769 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:56.769 ... 
00:29:56.769 fio-3.35 00:29:56.769 Starting 4 threads 00:29:56.769 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.066 00:30:02.066 filename0: (groupid=0, jobs=1): err= 0: pid=2876223: Wed Jul 24 22:17:41 2024 00:30:02.066 read: IOPS=2731, BW=21.3MiB/s (22.4MB/s)(107MiB/5002msec) 00:30:02.066 slat (usec): min=2, max=110, avg= 8.53, stdev= 2.95 00:30:02.066 clat (usec): min=1489, max=49356, avg=2905.72, stdev=1207.89 00:30:02.066 lat (usec): min=1501, max=49366, avg=2914.25, stdev=1207.77 00:30:02.066 clat percentiles (usec): 00:30:02.066 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2573], 00:30:02.066 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2900], 00:30:02.066 | 70.00th=[ 2933], 80.00th=[ 3163], 90.00th=[ 3490], 95.00th=[ 3818], 00:30:02.066 | 99.00th=[ 4228], 99.50th=[ 4228], 99.90th=[ 4555], 99.95th=[49546], 00:30:02.066 | 99.99th=[49546] 00:30:02.066 bw ( KiB/s): min=20336, max=22656, per=25.15%, avg=21854.40, stdev=651.04, samples=10 00:30:02.066 iops : min= 2542, max= 2832, avg=2731.80, stdev=81.38, samples=10 00:30:02.066 lat (msec) : 2=0.94%, 4=95.47%, 10=3.54%, 50=0.06% 00:30:02.066 cpu : usr=93.64%, sys=6.06%, ctx=9, majf=0, minf=9 00:30:02.066 IO depths : 1=0.3%, 2=1.7%, 4=68.4%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:02.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.066 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.066 issued rwts: total=13661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.066 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:02.066 filename0: (groupid=0, jobs=1): err= 0: pid=2876224: Wed Jul 24 22:17:41 2024 00:30:02.066 read: IOPS=2739, BW=21.4MiB/s (22.4MB/s)(107MiB/5005msec) 00:30:02.066 slat (usec): min=5, max=161, avg= 8.56, stdev= 3.10 00:30:02.066 clat (usec): min=1224, max=9619, avg=2896.94, stdev=492.29 00:30:02.066 lat (usec): min=1230, max=9625, avg=2905.51, stdev=492.14 00:30:02.066 clat percentiles (usec): 00:30:02.066 | 1.00th=[ 1680], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2573], 00:30:02.066 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2900], 00:30:02.066 | 70.00th=[ 2966], 80.00th=[ 3228], 90.00th=[ 3589], 95.00th=[ 3884], 00:30:02.066 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 4883], 99.95th=[ 5932], 00:30:02.066 | 99.99th=[ 9634] 00:30:02.066 bw ( KiB/s): min=21456, max=22400, per=25.23%, avg=21926.40, stdev=355.31, samples=10 00:30:02.066 iops : min= 2682, max= 2800, avg=2740.80, stdev=44.41, samples=10 00:30:02.066 lat (msec) : 2=1.81%, 4=94.65%, 10=3.54% 00:30:02.066 cpu : usr=93.41%, sys=6.29%, ctx=10, majf=0, minf=9 00:30:02.066 IO depths : 1=0.2%, 2=1.6%, 4=67.4%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:02.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.066 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.066 issued rwts: total=13712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.066 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:02.066 filename1: (groupid=0, jobs=1): err= 0: pid=2876225: Wed Jul 24 22:17:41 2024 00:30:02.066 read: IOPS=2638, BW=20.6MiB/s (21.6MB/s)(103MiB/5004msec) 00:30:02.066 slat (usec): min=5, max=115, avg= 8.47, stdev= 2.97 00:30:02.066 clat (usec): min=1524, max=45489, avg=3009.24, stdev=1141.50 00:30:02.066 lat (usec): min=1533, max=45513, avg=3017.71, stdev=1141.51 00:30:02.066 clat percentiles (usec): 00:30:02.066 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 
2507], 20.00th=[ 2638], 00:30:02.066 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 00:30:02.067 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 3949], 00:30:02.067 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 5800], 99.95th=[45351], 00:30:02.067 | 99.99th=[45351] 00:30:02.067 bw ( KiB/s): min=19527, max=21632, per=24.30%, avg=21119.10, stdev=591.63, samples=10 00:30:02.067 iops : min= 2440, max= 2704, avg=2639.80, stdev=74.22, samples=10 00:30:02.067 lat (msec) : 2=0.26%, 4=95.30%, 10=4.38%, 50=0.06% 00:30:02.067 cpu : usr=92.86%, sys=6.82%, ctx=7, majf=0, minf=11 00:30:02.067 IO depths : 1=0.2%, 2=1.7%, 4=67.5%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:02.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.067 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.067 issued rwts: total=13205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.067 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:02.067 filename1: (groupid=0, jobs=1): err= 0: pid=2876227: Wed Jul 24 22:17:41 2024 00:30:02.067 read: IOPS=2754, BW=21.5MiB/s (22.6MB/s)(108MiB/5005msec) 00:30:02.067 slat (nsec): min=5801, max=92615, avg=8610.75, stdev=2936.53 00:30:02.067 clat (usec): min=1208, max=7143, avg=2881.39, stdev=474.77 00:30:02.067 lat (usec): min=1214, max=7150, avg=2890.00, stdev=474.83 00:30:02.067 clat percentiles (usec): 00:30:02.067 | 1.00th=[ 1713], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2573], 00:30:02.067 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2900], 00:30:02.067 | 70.00th=[ 2966], 80.00th=[ 3195], 90.00th=[ 3490], 95.00th=[ 3785], 00:30:02.067 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 5080], 99.95th=[ 5538], 00:30:02.067 | 99.99th=[ 5538] 00:30:02.067 bw ( KiB/s): min=21248, max=23184, per=25.37%, avg=22041.60, stdev=563.28, samples=10 00:30:02.067 iops : min= 2656, max= 2898, avg=2755.20, stdev=70.41, samples=10 00:30:02.067 lat (msec) : 2=2.09%, 4=94.90%, 10=3.01% 00:30:02.067 cpu : usr=93.49%, sys=6.22%, ctx=7, majf=0, minf=9 00:30:02.067 IO depths : 1=0.3%, 2=1.6%, 4=69.1%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:02.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.067 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.067 issued rwts: total=13784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.067 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:02.067 00:30:02.067 Run status group 0 (all jobs): 00:30:02.067 READ: bw=84.9MiB/s (89.0MB/s), 20.6MiB/s-21.5MiB/s (21.6MB/s-22.6MB/s), io=425MiB (445MB), run=5002-5005msec 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.067 00:30:02.067 real 0m24.218s 00:30:02.067 user 4m52.594s 00:30:02.067 sys 0m10.189s 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:02.067 22:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:02.067 ************************************ 00:30:02.067 END TEST fio_dif_rand_params 00:30:02.067 ************************************ 00:30:02.067 22:17:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:02.067 22:17:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:02.067 22:17:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:02.067 22:17:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:02.325 ************************************ 00:30:02.325 START TEST fio_dif_digest 00:30:02.325 ************************************ 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:02.325 
22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:02.325 bdev_null0 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:02.325 [2024-07-24 22:17:41.330313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:02.325 22:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:02.326 { 00:30:02.326 "params": { 00:30:02.326 "name": "Nvme$subsystem", 00:30:02.326 "trtype": "$TEST_TRANSPORT", 00:30:02.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.326 "adrfam": "ipv4", 00:30:02.326 "trsvcid": "$NVMF_PORT", 00:30:02.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.326 "hdgst": ${hdgst:-false}, 00:30:02.326 "ddgst": ${ddgst:-false} 00:30:02.326 }, 00:30:02.326 "method": "bdev_nvme_attach_controller" 
00:30:02.326 } 00:30:02.326 EOF 00:30:02.326 )") 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:02.326 "params": { 00:30:02.326 "name": "Nvme0", 00:30:02.326 "trtype": "tcp", 00:30:02.326 "traddr": "10.0.0.2", 00:30:02.326 "adrfam": "ipv4", 00:30:02.326 "trsvcid": "4420", 00:30:02.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:02.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:02.326 "hdgst": true, 00:30:02.326 "ddgst": true 00:30:02.326 }, 00:30:02.326 "method": "bdev_nvme_attach_controller" 00:30:02.326 }' 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:02.326 22:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:02.584 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:02.584 ... 
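
Relative to the random-parameter pass above, the digest pass changes only two things visible in the trace: the null bdev is created with --dif-type 3, and the attach parameters carry "hdgst": true and "ddgst": true, enabling the NVMe/TCP header and data digests (CRC32C) on the connection. Restated as direct rpc.py calls (the rpc_cmd wrapper used by the log is assumed to be equivalent to scripts/rpc.py against the running nvmf_tgt), the target-side setup is roughly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# export it over NVMe/TCP on 10.0.0.2:4420
# (an NVMe/TCP transport is assumed to have been created earlier in the test)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
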
00:30:02.584 fio-3.35 00:30:02.584 Starting 3 threads 00:30:02.584 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.773 00:30:14.773 filename0: (groupid=0, jobs=1): err= 0: pid=2877350: Wed Jul 24 22:17:52 2024 00:30:14.773 read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(378MiB/10045msec) 00:30:14.773 slat (nsec): min=6147, max=58039, avg=12992.30, stdev=4274.90 00:30:14.773 clat (usec): min=4739, max=51286, avg=9935.54, stdev=1660.98 00:30:14.773 lat (usec): min=4748, max=51295, avg=9948.53, stdev=1661.12 00:30:14.773 clat percentiles (usec): 00:30:14.773 | 1.00th=[ 5932], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 9110], 00:30:14.773 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:30:14.773 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:30:14.773 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13042], 99.95th=[47973], 00:30:14.773 | 99.99th=[51119] 00:30:14.773 bw ( KiB/s): min=36352, max=44288, per=36.43%, avg=38681.60, stdev=1810.00, samples=20 00:30:14.773 iops : min= 284, max= 346, avg=302.20, stdev=14.14, samples=20 00:30:14.773 lat (msec) : 10=40.71%, 20=59.23%, 50=0.03%, 100=0.03% 00:30:14.773 cpu : usr=93.06%, sys=6.60%, ctx=22, majf=0, minf=200 00:30:14.773 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.773 issued rwts: total=3024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.773 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:14.773 filename0: (groupid=0, jobs=1): err= 0: pid=2877351: Wed Jul 24 22:17:52 2024 00:30:14.773 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(341MiB/10001msec) 00:30:14.773 slat (usec): min=6, max=833, avg=16.23, stdev=22.78 00:30:14.773 clat (usec): min=6524, max=56094, avg=10981.56, stdev=5061.19 00:30:14.773 lat (usec): min=6535, max=56377, avg=10997.79, stdev=5066.73 00:30:14.773 clat percentiles (usec): 00:30:14.773 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[ 9765], 00:30:14.773 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:30:14.773 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:30:14.773 | 99.00th=[52691], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:30:14.773 | 99.99th=[55837] 00:30:14.773 bw ( KiB/s): min=28416, max=38144, per=32.74%, avg=34762.11, stdev=3026.12, samples=19 00:30:14.773 iops : min= 222, max= 298, avg=271.58, stdev=23.64, samples=19 00:30:14.773 lat (msec) : 10=26.22%, 20=72.46%, 50=0.04%, 100=1.28% 00:30:14.773 cpu : usr=91.35%, sys=7.34%, ctx=268, majf=0, minf=130 00:30:14.773 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.773 issued rwts: total=2727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.773 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:14.773 filename0: (groupid=0, jobs=1): err= 0: pid=2877352: Wed Jul 24 22:17:52 2024 00:30:14.773 read: IOPS=257, BW=32.1MiB/s (33.7MB/s)(323MiB/10046msec) 00:30:14.773 slat (nsec): min=6192, max=74721, avg=13018.27, stdev=4387.08 00:30:14.773 clat (usec): min=6368, max=55147, avg=11640.14, stdev=6525.79 00:30:14.773 lat (usec): min=6381, max=55166, avg=11653.15, stdev=6525.89 00:30:14.773 clat percentiles (usec): 00:30:14.773 | 
1.00th=[ 7308], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10028], 00:30:14.773 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:30:14.773 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:30:14.773 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:30:14.773 | 99.99th=[55313] 00:30:14.773 bw ( KiB/s): min=28160, max=37376, per=31.10%, avg=33024.00, stdev=2898.69, samples=20 00:30:14.773 iops : min= 220, max= 292, avg=258.00, stdev=22.65, samples=20 00:30:14.773 lat (msec) : 10=19.79%, 20=77.81%, 50=0.08%, 100=2.32% 00:30:14.773 cpu : usr=93.35%, sys=6.33%, ctx=16, majf=0, minf=284 00:30:14.773 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.773 issued rwts: total=2582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.773 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:14.773 00:30:14.773 Run status group 0 (all jobs): 00:30:14.773 READ: bw=104MiB/s (109MB/s), 32.1MiB/s-37.6MiB/s (33.7MB/s-39.5MB/s), io=1042MiB (1092MB), run=10001-10046msec 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.773 00:30:14.773 real 0m11.226s 00:30:14.773 user 0m37.136s 00:30:14.773 sys 0m2.422s 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:14.773 22:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:14.773 ************************************ 00:30:14.773 END TEST fio_dif_digest 00:30:14.774 ************************************ 00:30:14.774 22:17:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:14.774 22:17:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:14.774 rmmod nvme_tcp 00:30:14.774 rmmod nvme_fabrics 
00:30:14.774 rmmod nvme_keyring 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2868542 ']' 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2868542 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2868542 ']' 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2868542 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2868542 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2868542' 00:30:14.774 killing process with pid 2868542 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2868542 00:30:14.774 22:17:52 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2868542 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:14.774 22:17:52 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:17.300 Waiting for block devices as requested 00:30:17.300 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:17.300 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:17.300 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:17.300 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:17.300 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:17.556 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:17.557 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:17.557 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:17.557 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:17.813 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:17.813 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:17.813 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:18.071 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:18.071 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:18.071 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:18.328 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:18.328 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:18.586 22:17:57 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:18.586 22:17:57 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:18.586 22:17:57 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:18.586 22:17:57 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:18.586 22:17:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.586 22:17:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:18.586 22:17:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.485 22:17:59 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:20.485 00:30:20.485 real 1m16.416s 00:30:20.485 user 7m13.950s 00:30:20.485 sys 0m30.880s 00:30:20.485 22:17:59 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:20.485 22:17:59 nvmf_dif -- common/autotest_common.sh@10 
-- # set +x 00:30:20.485 ************************************ 00:30:20.485 END TEST nvmf_dif 00:30:20.485 ************************************ 00:30:20.743 22:17:59 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:20.743 22:17:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:20.743 22:17:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.743 22:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:20.743 ************************************ 00:30:20.743 START TEST nvmf_abort_qd_sizes 00:30:20.743 ************************************ 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:20.743 * Looking for test storage... 00:30:20.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.743 22:17:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:20.744 22:17:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.744 22:17:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:20.744 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:20.744 22:17:59 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:20.744 22:17:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:27.291 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:27.291 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:27.291 Found net devices under 0000:af:00.0: cvl_0_0 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:27.291 Found net devices under 0000:af:00.1: cvl_0_1 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
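The enumeration above maps each supported NIC PCI function to its kernel net device by globbing sysfs, which is how cvl_0_0 and cvl_0_1 are found under the two E810 ports. A minimal stand-alone sketch of that lookup (the BDF below is just the first port from this run, used only for illustration):

pci=0000:af:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$dev" ] || continue                      # no net driver bound to this function
    echo "Found net device under $pci: ${dev##*/}" # strip the sysfs path, keep the ifname
done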
00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:27.291 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.292 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.549 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:27.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:30:27.549 00:30:27.549 --- 10.0.0.2 ping statistics --- 00:30:27.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.549 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:30:27.549 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:27.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:30:27.549 00:30:27.549 --- 10.0.0.1 ping statistics --- 00:30:27.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.549 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:30:27.549 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.549 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:30:27.549 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:27.549 22:18:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:30.879 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:30.879 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:32.250 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2886380 00:30:32.250 22:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2886380 00:30:32.251 22:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2886380 ']' 00:30:32.251 22:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.251 22:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:32.251 22:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
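Condensed from the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into a private namespace and addressed as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. The commands below only restate what the xtrace shows, with the interface, namespace, and address names taken from this run; they are illustrative rather than a canonical setup script.

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                    # root namespace reaches the target side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace reaches the initiator

The target application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why the spdk_target_abort listener later comes up on 10.0.0.2:4420.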
00:30:32.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.251 22:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:32.251 22:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:32.251 [2024-07-24 22:18:11.447172] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:30:32.251 [2024-07-24 22:18:11.447227] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.508 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.508 [2024-07-24 22:18:11.518733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.508 [2024-07-24 22:18:11.593219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.508 [2024-07-24 22:18:11.593258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.508 [2024-07-24 22:18:11.593269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.508 [2024-07-24 22:18:11.593277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.508 [2024-07-24 22:18:11.593284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.508 [2024-07-24 22:18:11.594736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.508 [2024-07-24 22:18:11.594757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.508 [2024-07-24 22:18:11.594844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.508 [2024-07-24 22:18:11.594847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.074 22:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:33.074 22:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:30:33.074 22:18:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:33.074 22:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:33.074 22:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:30:33.331 22:18:12 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:33.331 22:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:33.331 ************************************ 00:30:33.331 START TEST spdk_target_abort 00:30:33.331 ************************************ 00:30:33.331 22:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:30:33.331 22:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:33.331 22:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:30:33.331 22:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.331 22:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:36.666 spdk_targetn1 00:30:36.666 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.666 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:36.666 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.666 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:36.666 [2024-07-24 22:18:15.209384] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.666 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:36.667 [2024-07-24 22:18:15.245671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:36.667 22:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:36.667 EAL: No free 2048 kB hugepages 
reported on node 1 00:30:39.948 Initializing NVMe Controllers 00:30:39.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:39.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:39.948 Initialization complete. Launching workers. 00:30:39.948 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11485, failed: 0 00:30:39.948 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1642, failed to submit 9843 00:30:39.948 success 897, unsuccess 745, failed 0 00:30:39.948 22:18:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:39.948 22:18:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:39.948 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.227 Initializing NVMe Controllers 00:30:43.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:43.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:43.227 Initialization complete. Launching workers. 00:30:43.227 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8672, failed: 0 00:30:43.227 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1273, failed to submit 7399 00:30:43.227 success 319, unsuccess 954, failed 0 00:30:43.227 22:18:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:43.227 22:18:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:43.227 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.754 Initializing NVMe Controllers 00:30:45.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:45.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:45.754 Initialization complete. Launching workers. 
00:30:45.754 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39106, failed: 0 00:30:45.754 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2719, failed to submit 36387 00:30:45.754 success 578, unsuccess 2141, failed 0 00:30:46.011 22:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:46.011 22:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.011 22:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:46.011 22:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.011 22:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:46.011 22:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.011 22:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2886380 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2886380 ']' 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2886380 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2886380 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2886380' 00:30:47.910 killing process with pid 2886380 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2886380 00:30:47.910 22:18:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2886380 00:30:47.910 00:30:47.910 real 0m14.759s 00:30:47.910 user 0m58.356s 00:30:47.910 sys 0m2.850s 00:30:47.910 22:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:47.910 22:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:47.910 ************************************ 00:30:47.910 END TEST spdk_target_abort 00:30:47.910 ************************************ 00:30:48.169 22:18:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:48.169 22:18:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:48.169 22:18:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:48.169 22:18:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:48.169 ************************************ 00:30:48.169 START TEST kernel_target_abort 00:30:48.169 
************************************ 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:48.169 22:18:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:51.501 Waiting for block devices as requested 00:30:51.501 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:51.501 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:51.501 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:51.502 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:51.502 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:51.502 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:51.760 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:51.760 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:51.760 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:52.017 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:52.017 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:52.017 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:52.275 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:52.275 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:52.275 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:52.532 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:52.532 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:52.789 No valid GPT data, bailing 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:52.789 22:18:31 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:30:52.789 00:30:52.789 Discovery Log Number of Records 2, Generation counter 2 00:30:52.789 =====Discovery Log Entry 0====== 00:30:52.789 trtype: tcp 00:30:52.789 adrfam: ipv4 00:30:52.789 subtype: current discovery subsystem 00:30:52.789 treq: not specified, sq flow control disable supported 00:30:52.789 portid: 1 00:30:52.789 trsvcid: 4420 00:30:52.789 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:52.789 traddr: 10.0.0.1 00:30:52.789 eflags: none 00:30:52.789 sectype: none 00:30:52.789 =====Discovery Log Entry 1====== 00:30:52.789 trtype: tcp 00:30:52.789 adrfam: ipv4 00:30:52.789 subtype: nvme subsystem 00:30:52.789 treq: not specified, sq flow control disable supported 00:30:52.789 portid: 1 00:30:52.789 trsvcid: 4420 00:30:52.789 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:52.789 traddr: 10.0.0.1 00:30:52.789 eflags: none 00:30:52.789 sectype: none 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:52.789 22:18:31 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:52.789 22:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:52.789 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.061 Initializing NVMe Controllers 00:30:56.061 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:56.061 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:56.061 Initialization complete. Launching workers. 00:30:56.061 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73737, failed: 0 00:30:56.061 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 73737, failed to submit 0 00:30:56.061 success 0, unsuccess 73737, failed 0 00:30:56.061 22:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:56.061 22:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:56.062 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.342 Initializing NVMe Controllers 00:30:59.342 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:59.342 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:59.342 Initialization complete. Launching workers. 
00:30:59.342 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 126963, failed: 0 00:30:59.342 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31878, failed to submit 95085 00:30:59.342 success 0, unsuccess 31878, failed 0 00:30:59.342 22:18:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:59.342 22:18:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:59.342 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.613 Initializing NVMe Controllers 00:31:02.613 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:02.613 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:02.613 Initialization complete. Launching workers. 00:31:02.613 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 123150, failed: 0 00:31:02.613 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30778, failed to submit 92372 00:31:02.613 success 0, unsuccess 30778, failed 0 00:31:02.613 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:02.613 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:02.613 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:02.613 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:02.613 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:02.613 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:02.613 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:02.614 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:02.614 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:02.614 22:18:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:05.137 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:31:05.137 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:05.137 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:07.037 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:07.037 00:31:07.037 real 0m18.656s 00:31:07.037 user 0m7.616s 00:31:07.037 sys 0m5.880s 00:31:07.037 22:18:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:07.037 22:18:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:07.037 ************************************ 00:31:07.037 END TEST kernel_target_abort 00:31:07.037 ************************************ 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:07.037 rmmod nvme_tcp 00:31:07.037 rmmod nvme_fabrics 00:31:07.037 rmmod nvme_keyring 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2886380 ']' 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2886380 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2886380 ']' 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2886380 00:31:07.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2886380) - No such process 00:31:07.037 22:18:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2886380 is not found' 00:31:07.037 Process with pid 2886380 is not found 00:31:07.037 22:18:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:07.037 22:18:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:10.346 Waiting for block devices as requested 00:31:10.346 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:10.346 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:10.346 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:10.346 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:10.346 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:10.346 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:10.610 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:10.610 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:10.610 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:10.610 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:10.868 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:10.868 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:10.868 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:11.126 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:11.126 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:11.126 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:11.385 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:11.385 22:18:50 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:11.385 22:18:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:11.385 22:18:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:11.385 22:18:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:11.385 22:18:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.385 22:18:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:11.385 22:18:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.915 22:18:52 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:13.915 00:31:13.915 real 0m52.829s 00:31:13.915 user 1m10.424s 00:31:13.915 sys 0m18.866s 00:31:13.915 22:18:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:13.915 22:18:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:13.915 ************************************ 00:31:13.915 END TEST nvmf_abort_qd_sizes 00:31:13.915 ************************************ 00:31:13.915 22:18:52 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:13.915 22:18:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:13.915 22:18:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:13.915 22:18:52 -- common/autotest_common.sh@10 -- # set +x 00:31:13.916 ************************************ 00:31:13.916 START TEST keyring_file 00:31:13.916 ************************************ 00:31:13.916 22:18:52 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:13.916 * Looking for test storage... 
00:31:13.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.916 22:18:52 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.916 22:18:52 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.916 22:18:52 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.916 22:18:52 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.916 22:18:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.916 22:18:52 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.916 22:18:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:13.916 22:18:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YstSA72W7r 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:13.916 22:18:52 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YstSA72W7r 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YstSA72W7r 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.YstSA72W7r 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MetwJjt2XR 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:13.916 22:18:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MetwJjt2XR 00:31:13.916 22:18:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MetwJjt2XR 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.MetwJjt2XR 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=2895778 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2895778 00:31:13.916 22:18:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2895778 ']' 00:31:13.916 22:18:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.916 22:18:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:13.916 22:18:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.916 22:18:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:13.916 22:18:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:13.916 22:18:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:13.916 [2024-07-24 22:18:52.955626] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:31:13.916 [2024-07-24 22:18:52.955679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895778 ] 00:31:13.916 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.916 [2024-07-24 22:18:53.025303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.916 [2024-07-24 22:18:53.098379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.849 22:18:53 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:14.849 22:18:53 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:14.849 22:18:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:14.849 22:18:53 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.849 22:18:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:14.849 [2024-07-24 22:18:53.740436] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.849 null0 00:31:14.849 [2024-07-24 22:18:53.772492] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:14.849 [2024-07-24 22:18:53.772787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:14.849 [2024-07-24 22:18:53.780507] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.850 22:18:53 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:14.850 [2024-07-24 22:18:53.788527] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:14.850 request: 00:31:14.850 { 00:31:14.850 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.850 "secure_channel": false, 00:31:14.850 "listen_address": { 00:31:14.850 "trtype": "tcp", 00:31:14.850 "traddr": "127.0.0.1", 00:31:14.850 "trsvcid": "4420" 00:31:14.850 }, 00:31:14.850 "method": "nvmf_subsystem_add_listener", 00:31:14.850 "req_id": 1 00:31:14.850 } 00:31:14.850 Got JSON-RPC error response 00:31:14.850 response: 00:31:14.850 { 00:31:14.850 "code": -32602, 00:31:14.850 "message": "Invalid parameters" 00:31:14.850 } 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@653 -- # es=1 
00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:14.850 22:18:53 keyring_file -- keyring/file.sh@46 -- # bperfpid=2895795 00:31:14.850 22:18:53 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2895795 /var/tmp/bperf.sock 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2895795 ']' 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:14.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:14.850 22:18:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:14.850 22:18:53 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:14.850 [2024-07-24 22:18:53.839822] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 00:31:14.850 [2024-07-24 22:18:53.839869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895795 ] 00:31:14.850 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.850 [2024-07-24 22:18:53.909262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.850 [2024-07-24 22:18:53.977993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.782 22:18:54 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:15.782 22:18:54 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:15.782 22:18:54 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YstSA72W7r 00:31:15.782 22:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YstSA72W7r 00:31:15.782 22:18:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MetwJjt2XR 00:31:15.782 22:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MetwJjt2XR 00:31:15.782 22:18:54 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:15.782 22:18:54 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:15.782 22:18:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:15.782 22:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:15.782 22:18:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:16.040 22:18:55 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.YstSA72W7r == \/\t\m\p\/\t\m\p\.\Y\s\t\S\A\7\2\W\7\r ]] 00:31:16.040 22:18:55 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:31:16.040 22:18:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:16.040 22:18:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:16.040 22:18:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:16.040 22:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:16.297 22:18:55 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.MetwJjt2XR == \/\t\m\p\/\t\m\p\.\M\e\t\w\J\j\t\2\X\R ]] 00:31:16.297 22:18:55 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:16.297 22:18:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:16.297 22:18:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:16.297 22:18:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:16.297 22:18:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:16.297 22:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:16.555 22:18:55 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:16.555 22:18:55 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:16.555 22:18:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:16.555 22:18:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:16.555 22:18:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:16.555 22:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:16.555 22:18:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:16.555 22:18:55 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:16.555 22:18:55 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:16.555 22:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:16.813 [2024-07-24 22:18:55.871508] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:16.813 nvme0n1 00:31:16.813 22:18:55 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:16.813 22:18:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:16.813 22:18:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:16.813 22:18:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:16.813 22:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:16.813 22:18:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:17.071 22:18:56 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:17.071 22:18:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:17.071 22:18:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:17.071 22:18:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:17.071 22:18:56 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:17.071 22:18:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:17.071 22:18:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:17.330 22:18:56 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:17.330 22:18:56 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:17.330 Running I/O for 1 seconds... 00:31:18.265 00:31:18.265 Latency(us) 00:31:18.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.265 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:18.265 nvme0n1 : 1.01 12370.36 48.32 0.00 0.00 10296.55 2582.12 11586.76 00:31:18.265 =================================================================================================================== 00:31:18.265 Total : 12370.36 48.32 0.00 0.00 10296.55 2582.12 11586.76 00:31:18.265 0 00:31:18.265 22:18:57 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:18.265 22:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:18.522 22:18:57 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:18.522 22:18:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:18.522 22:18:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:18.522 22:18:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:18.522 22:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:18.522 22:18:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:18.780 22:18:57 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:18.780 22:18:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:18.780 22:18:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:18.780 22:18:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:18.780 22:18:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:18.780 22:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:18.780 22:18:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:18.780 22:18:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:18.780 22:18:57 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:18.780 22:18:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:18.780 22:18:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:18.780 22:18:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:18.780 22:18:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:18.780 22:18:57 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:18.780 22:18:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:18.780 22:18:57 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:18.780 22:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:19.037 [2024-07-24 22:18:58.107060] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:19.037 [2024-07-24 22:18:58.107220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f6840 (107): Transport endpoint is not connected 00:31:19.037 [2024-07-24 22:18:58.108215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f6840 (9): Bad file descriptor 00:31:19.038 [2024-07-24 22:18:58.109216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:19.038 [2024-07-24 22:18:58.109228] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:19.038 [2024-07-24 22:18:58.109237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:19.038 request: 00:31:19.038 { 00:31:19.038 "name": "nvme0", 00:31:19.038 "trtype": "tcp", 00:31:19.038 "traddr": "127.0.0.1", 00:31:19.038 "adrfam": "ipv4", 00:31:19.038 "trsvcid": "4420", 00:31:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.038 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:19.038 "prchk_reftag": false, 00:31:19.038 "prchk_guard": false, 00:31:19.038 "hdgst": false, 00:31:19.038 "ddgst": false, 00:31:19.038 "psk": "key1", 00:31:19.038 "method": "bdev_nvme_attach_controller", 00:31:19.038 "req_id": 1 00:31:19.038 } 00:31:19.038 Got JSON-RPC error response 00:31:19.038 response: 00:31:19.038 { 00:31:19.038 "code": -5, 00:31:19.038 "message": "Input/output error" 00:31:19.038 } 00:31:19.038 22:18:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:19.038 22:18:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:19.038 22:18:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:19.038 22:18:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:19.038 22:18:58 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:19.038 22:18:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:19.038 22:18:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.038 22:18:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.038 22:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.038 22:18:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:19.295 22:18:58 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:19.295 22:18:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:19.295 22:18:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.295 22:18:58 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:19.295 22:18:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.295 22:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.295 22:18:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:19.295 22:18:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:19.295 22:18:58 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:19.295 22:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:19.552 22:18:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:19.552 22:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:19.809 22:18:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:19.809 22:18:58 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:19.809 22:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.809 22:18:59 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:19.809 22:18:59 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.YstSA72W7r 00:31:19.809 22:18:59 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.YstSA72W7r 00:31:19.809 22:18:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:19.809 22:18:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.YstSA72W7r 00:31:19.809 22:18:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:19.809 22:18:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:19.809 22:18:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:19.809 22:18:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:19.809 22:18:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YstSA72W7r 00:31:19.809 22:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YstSA72W7r 00:31:20.066 [2024-07-24 22:18:59.158271] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YstSA72W7r': 0100660 00:31:20.066 [2024-07-24 22:18:59.158298] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:20.066 request: 00:31:20.066 { 00:31:20.066 "name": "key0", 00:31:20.066 "path": "/tmp/tmp.YstSA72W7r", 00:31:20.066 "method": "keyring_file_add_key", 00:31:20.066 "req_id": 1 00:31:20.066 } 00:31:20.066 Got JSON-RPC error response 00:31:20.066 response: 00:31:20.066 { 00:31:20.066 "code": -1, 00:31:20.066 "message": "Operation not permitted" 00:31:20.066 } 00:31:20.066 22:18:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:20.066 22:18:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:20.066 22:18:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:20.066 22:18:59 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:20.066 22:18:59 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.YstSA72W7r 00:31:20.066 22:18:59 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YstSA72W7r 00:31:20.066 22:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YstSA72W7r 00:31:20.324 22:18:59 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.YstSA72W7r 00:31:20.324 22:18:59 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:20.324 22:18:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:20.324 22:18:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:20.324 22:18:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:20.324 22:18:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:20.324 22:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:20.324 22:18:59 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:20.324 22:18:59 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:20.324 22:18:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:20.324 22:18:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:20.324 22:18:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:20.324 22:18:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:20.324 22:18:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:20.582 22:18:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:20.582 22:18:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:20.582 22:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:20.582 [2024-07-24 22:18:59.691687] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.YstSA72W7r': No such file or directory 00:31:20.582 [2024-07-24 22:18:59.691719] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:20.582 [2024-07-24 22:18:59.691741] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:20.582 [2024-07-24 22:18:59.691749] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:20.582 [2024-07-24 22:18:59.691757] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:20.582 request: 00:31:20.582 { 00:31:20.582 "name": "nvme0", 00:31:20.582 "trtype": "tcp", 00:31:20.582 "traddr": "127.0.0.1", 00:31:20.582 "adrfam": "ipv4", 00:31:20.582 
"trsvcid": "4420", 00:31:20.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:20.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:20.582 "prchk_reftag": false, 00:31:20.582 "prchk_guard": false, 00:31:20.582 "hdgst": false, 00:31:20.582 "ddgst": false, 00:31:20.582 "psk": "key0", 00:31:20.582 "method": "bdev_nvme_attach_controller", 00:31:20.582 "req_id": 1 00:31:20.582 } 00:31:20.582 Got JSON-RPC error response 00:31:20.582 response: 00:31:20.582 { 00:31:20.582 "code": -19, 00:31:20.582 "message": "No such device" 00:31:20.582 } 00:31:20.582 22:18:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:20.582 22:18:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:20.582 22:18:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:20.582 22:18:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:20.582 22:18:59 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:20.582 22:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:20.840 22:18:59 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NAUqTzeots 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:20.840 22:18:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:20.840 22:18:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:20.840 22:18:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:20.840 22:18:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:20.840 22:18:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:20.840 22:18:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NAUqTzeots 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NAUqTzeots 00:31:20.840 22:18:59 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.NAUqTzeots 00:31:20.840 22:18:59 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NAUqTzeots 00:31:20.840 22:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NAUqTzeots 00:31:21.098 22:19:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:21.098 22:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:21.356 nvme0n1 00:31:21.356 
22:19:00 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:21.356 22:19:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:21.356 22:19:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:21.356 22:19:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:21.356 22:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:21.356 22:19:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:21.356 22:19:00 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:21.356 22:19:00 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:21.356 22:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:21.614 22:19:00 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:21.614 22:19:00 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:21.614 22:19:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:21.614 22:19:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:21.614 22:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:21.871 22:19:00 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:21.871 22:19:00 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:21.871 22:19:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:21.871 22:19:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:21.871 22:19:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:21.871 22:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:21.871 22:19:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:21.871 22:19:01 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:21.871 22:19:01 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:21.871 22:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:22.128 22:19:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:22.128 22:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.128 22:19:01 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:22.385 22:19:01 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:22.385 22:19:01 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NAUqTzeots 00:31:22.385 22:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NAUqTzeots 00:31:22.642 22:19:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MetwJjt2XR 00:31:22.642 22:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MetwJjt2XR 00:31:22.642 22:19:01 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:22.642 22:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:22.900 nvme0n1 00:31:22.900 22:19:02 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:22.900 22:19:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:23.158 22:19:02 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:23.158 "subsystems": [ 00:31:23.158 { 00:31:23.158 "subsystem": "keyring", 00:31:23.158 "config": [ 00:31:23.158 { 00:31:23.158 "method": "keyring_file_add_key", 00:31:23.158 "params": { 00:31:23.158 "name": "key0", 00:31:23.158 "path": "/tmp/tmp.NAUqTzeots" 00:31:23.158 } 00:31:23.158 }, 00:31:23.158 { 00:31:23.158 "method": "keyring_file_add_key", 00:31:23.158 "params": { 00:31:23.158 "name": "key1", 00:31:23.158 "path": "/tmp/tmp.MetwJjt2XR" 00:31:23.158 } 00:31:23.158 } 00:31:23.158 ] 00:31:23.158 }, 00:31:23.158 { 00:31:23.158 "subsystem": "iobuf", 00:31:23.158 "config": [ 00:31:23.158 { 00:31:23.158 "method": "iobuf_set_options", 00:31:23.158 "params": { 00:31:23.158 "small_pool_count": 8192, 00:31:23.158 "large_pool_count": 1024, 00:31:23.158 "small_bufsize": 8192, 00:31:23.158 "large_bufsize": 135168 00:31:23.158 } 00:31:23.158 } 00:31:23.158 ] 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "subsystem": "sock", 00:31:23.159 "config": [ 00:31:23.159 { 00:31:23.159 "method": "sock_set_default_impl", 00:31:23.159 "params": { 00:31:23.159 "impl_name": "posix" 00:31:23.159 } 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "method": "sock_impl_set_options", 00:31:23.159 "params": { 00:31:23.159 "impl_name": "ssl", 00:31:23.159 "recv_buf_size": 4096, 00:31:23.159 "send_buf_size": 4096, 00:31:23.159 "enable_recv_pipe": true, 00:31:23.159 "enable_quickack": false, 00:31:23.159 "enable_placement_id": 0, 00:31:23.159 "enable_zerocopy_send_server": true, 00:31:23.159 "enable_zerocopy_send_client": false, 00:31:23.159 "zerocopy_threshold": 0, 00:31:23.159 "tls_version": 0, 00:31:23.159 "enable_ktls": false 00:31:23.159 } 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "method": "sock_impl_set_options", 00:31:23.159 "params": { 00:31:23.159 "impl_name": "posix", 00:31:23.159 "recv_buf_size": 2097152, 00:31:23.159 "send_buf_size": 2097152, 00:31:23.159 "enable_recv_pipe": true, 00:31:23.159 "enable_quickack": false, 00:31:23.159 "enable_placement_id": 0, 00:31:23.159 "enable_zerocopy_send_server": true, 00:31:23.159 "enable_zerocopy_send_client": false, 00:31:23.159 "zerocopy_threshold": 0, 00:31:23.159 "tls_version": 0, 00:31:23.159 "enable_ktls": false 00:31:23.159 } 00:31:23.159 } 00:31:23.159 ] 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "subsystem": "vmd", 00:31:23.159 "config": [] 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "subsystem": "accel", 00:31:23.159 "config": [ 00:31:23.159 { 00:31:23.159 "method": "accel_set_options", 00:31:23.159 "params": { 00:31:23.159 "small_cache_size": 128, 00:31:23.159 "large_cache_size": 16, 00:31:23.159 "task_count": 2048, 00:31:23.159 "sequence_count": 2048, 00:31:23.159 "buf_count": 2048 00:31:23.159 } 00:31:23.159 } 00:31:23.159 ] 00:31:23.159 
}, 00:31:23.159 { 00:31:23.159 "subsystem": "bdev", 00:31:23.159 "config": [ 00:31:23.159 { 00:31:23.159 "method": "bdev_set_options", 00:31:23.159 "params": { 00:31:23.159 "bdev_io_pool_size": 65535, 00:31:23.159 "bdev_io_cache_size": 256, 00:31:23.159 "bdev_auto_examine": true, 00:31:23.159 "iobuf_small_cache_size": 128, 00:31:23.159 "iobuf_large_cache_size": 16 00:31:23.159 } 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "method": "bdev_raid_set_options", 00:31:23.159 "params": { 00:31:23.159 "process_window_size_kb": 1024, 00:31:23.159 "process_max_bandwidth_mb_sec": 0 00:31:23.159 } 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "method": "bdev_iscsi_set_options", 00:31:23.159 "params": { 00:31:23.159 "timeout_sec": 30 00:31:23.159 } 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "method": "bdev_nvme_set_options", 00:31:23.159 "params": { 00:31:23.159 "action_on_timeout": "none", 00:31:23.159 "timeout_us": 0, 00:31:23.159 "timeout_admin_us": 0, 00:31:23.159 "keep_alive_timeout_ms": 10000, 00:31:23.159 "arbitration_burst": 0, 00:31:23.159 "low_priority_weight": 0, 00:31:23.159 "medium_priority_weight": 0, 00:31:23.159 "high_priority_weight": 0, 00:31:23.159 "nvme_adminq_poll_period_us": 10000, 00:31:23.159 "nvme_ioq_poll_period_us": 0, 00:31:23.159 "io_queue_requests": 512, 00:31:23.159 "delay_cmd_submit": true, 00:31:23.159 "transport_retry_count": 4, 00:31:23.159 "bdev_retry_count": 3, 00:31:23.159 "transport_ack_timeout": 0, 00:31:23.159 "ctrlr_loss_timeout_sec": 0, 00:31:23.159 "reconnect_delay_sec": 0, 00:31:23.159 "fast_io_fail_timeout_sec": 0, 00:31:23.159 "disable_auto_failback": false, 00:31:23.159 "generate_uuids": false, 00:31:23.159 "transport_tos": 0, 00:31:23.159 "nvme_error_stat": false, 00:31:23.159 "rdma_srq_size": 0, 00:31:23.159 "io_path_stat": false, 00:31:23.159 "allow_accel_sequence": false, 00:31:23.159 "rdma_max_cq_size": 0, 00:31:23.159 "rdma_cm_event_timeout_ms": 0, 00:31:23.159 "dhchap_digests": [ 00:31:23.159 "sha256", 00:31:23.159 "sha384", 00:31:23.159 "sha512" 00:31:23.159 ], 00:31:23.159 "dhchap_dhgroups": [ 00:31:23.159 "null", 00:31:23.159 "ffdhe2048", 00:31:23.159 "ffdhe3072", 00:31:23.159 "ffdhe4096", 00:31:23.159 "ffdhe6144", 00:31:23.159 "ffdhe8192" 00:31:23.159 ] 00:31:23.159 } 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "method": "bdev_nvme_attach_controller", 00:31:23.159 "params": { 00:31:23.159 "name": "nvme0", 00:31:23.159 "trtype": "TCP", 00:31:23.159 "adrfam": "IPv4", 00:31:23.159 "traddr": "127.0.0.1", 00:31:23.159 "trsvcid": "4420", 00:31:23.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.159 "prchk_reftag": false, 00:31:23.159 "prchk_guard": false, 00:31:23.159 "ctrlr_loss_timeout_sec": 0, 00:31:23.159 "reconnect_delay_sec": 0, 00:31:23.159 "fast_io_fail_timeout_sec": 0, 00:31:23.159 "psk": "key0", 00:31:23.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:23.159 "hdgst": false, 00:31:23.159 "ddgst": false 00:31:23.159 } 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "method": "bdev_nvme_set_hotplug", 00:31:23.159 "params": { 00:31:23.159 "period_us": 100000, 00:31:23.159 "enable": false 00:31:23.159 } 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "method": "bdev_wait_for_examine" 00:31:23.159 } 00:31:23.159 ] 00:31:23.159 }, 00:31:23.159 { 00:31:23.159 "subsystem": "nbd", 00:31:23.159 "config": [] 00:31:23.159 } 00:31:23.159 ] 00:31:23.159 }' 00:31:23.159 22:19:02 keyring_file -- keyring/file.sh@114 -- # killprocess 2895795 00:31:23.159 22:19:02 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2895795 ']' 00:31:23.159 22:19:02 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 2895795 00:31:23.159 22:19:02 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:23.159 22:19:02 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:23.159 22:19:02 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2895795 00:31:23.159 22:19:02 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:23.159 22:19:02 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:23.159 22:19:02 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2895795' 00:31:23.159 killing process with pid 2895795 00:31:23.159 22:19:02 keyring_file -- common/autotest_common.sh@969 -- # kill 2895795 00:31:23.159 Received shutdown signal, test time was about 1.000000 seconds 00:31:23.159 00:31:23.159 Latency(us) 00:31:23.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.159 =================================================================================================================== 00:31:23.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:23.160 22:19:02 keyring_file -- common/autotest_common.sh@974 -- # wait 2895795 00:31:23.418 22:19:02 keyring_file -- keyring/file.sh@117 -- # bperfpid=2897451 00:31:23.418 22:19:02 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2897451 /var/tmp/bperf.sock 00:31:23.418 22:19:02 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2897451 ']' 00:31:23.418 22:19:02 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:23.418 22:19:02 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.418 22:19:02 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:23.418 22:19:02 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:23.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
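At this point the first bperf instance (pid 2895795) has been shut down and file.sh relaunches bdevperf with the JSON produced by save_config fed back in on /dev/fd/63, which is what bash process substitution presents to the child process. A minimal sketch of that handover, with the flags copied from the command line in the trace and CONFIG standing for the JSON dump shown above:

    # sketch of the restart around keyring/file.sh@112-@115
    CONFIG=$(spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)   # captured from the first bperf before it is killed
    spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$CONFIG")                 # appears as /dev/fd/63 in the trace

Because the keys now arrive through the startup configuration instead of live keyring_file_add_key RPCs, the refcount and controller checks that follow are repeated against the restarted instance.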
00:31:23.418 22:19:02 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.418 22:19:02 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:23.418 "subsystems": [ 00:31:23.418 { 00:31:23.418 "subsystem": "keyring", 00:31:23.418 "config": [ 00:31:23.418 { 00:31:23.418 "method": "keyring_file_add_key", 00:31:23.418 "params": { 00:31:23.418 "name": "key0", 00:31:23.418 "path": "/tmp/tmp.NAUqTzeots" 00:31:23.418 } 00:31:23.418 }, 00:31:23.418 { 00:31:23.418 "method": "keyring_file_add_key", 00:31:23.418 "params": { 00:31:23.418 "name": "key1", 00:31:23.418 "path": "/tmp/tmp.MetwJjt2XR" 00:31:23.418 } 00:31:23.418 } 00:31:23.418 ] 00:31:23.418 }, 00:31:23.418 { 00:31:23.418 "subsystem": "iobuf", 00:31:23.418 "config": [ 00:31:23.418 { 00:31:23.418 "method": "iobuf_set_options", 00:31:23.418 "params": { 00:31:23.418 "small_pool_count": 8192, 00:31:23.418 "large_pool_count": 1024, 00:31:23.418 "small_bufsize": 8192, 00:31:23.418 "large_bufsize": 135168 00:31:23.418 } 00:31:23.418 } 00:31:23.418 ] 00:31:23.418 }, 00:31:23.418 { 00:31:23.418 "subsystem": "sock", 00:31:23.418 "config": [ 00:31:23.418 { 00:31:23.418 "method": "sock_set_default_impl", 00:31:23.418 "params": { 00:31:23.418 "impl_name": "posix" 00:31:23.418 } 00:31:23.418 }, 00:31:23.418 { 00:31:23.418 "method": "sock_impl_set_options", 00:31:23.418 "params": { 00:31:23.418 "impl_name": "ssl", 00:31:23.418 "recv_buf_size": 4096, 00:31:23.418 "send_buf_size": 4096, 00:31:23.418 "enable_recv_pipe": true, 00:31:23.418 "enable_quickack": false, 00:31:23.418 "enable_placement_id": 0, 00:31:23.419 "enable_zerocopy_send_server": true, 00:31:23.419 "enable_zerocopy_send_client": false, 00:31:23.419 "zerocopy_threshold": 0, 00:31:23.419 "tls_version": 0, 00:31:23.419 "enable_ktls": false 00:31:23.419 } 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "method": "sock_impl_set_options", 00:31:23.419 "params": { 00:31:23.419 "impl_name": "posix", 00:31:23.419 "recv_buf_size": 2097152, 00:31:23.419 "send_buf_size": 2097152, 00:31:23.419 "enable_recv_pipe": true, 00:31:23.419 "enable_quickack": false, 00:31:23.419 "enable_placement_id": 0, 00:31:23.419 "enable_zerocopy_send_server": true, 00:31:23.419 "enable_zerocopy_send_client": false, 00:31:23.419 "zerocopy_threshold": 0, 00:31:23.419 "tls_version": 0, 00:31:23.419 "enable_ktls": false 00:31:23.419 } 00:31:23.419 } 00:31:23.419 ] 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "subsystem": "vmd", 00:31:23.419 "config": [] 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "subsystem": "accel", 00:31:23.419 "config": [ 00:31:23.419 { 00:31:23.419 "method": "accel_set_options", 00:31:23.419 "params": { 00:31:23.419 "small_cache_size": 128, 00:31:23.419 "large_cache_size": 16, 00:31:23.419 "task_count": 2048, 00:31:23.419 "sequence_count": 2048, 00:31:23.419 "buf_count": 2048 00:31:23.419 } 00:31:23.419 } 00:31:23.419 ] 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "subsystem": "bdev", 00:31:23.419 "config": [ 00:31:23.419 { 00:31:23.419 "method": "bdev_set_options", 00:31:23.419 "params": { 00:31:23.419 "bdev_io_pool_size": 65535, 00:31:23.419 "bdev_io_cache_size": 256, 00:31:23.419 "bdev_auto_examine": true, 00:31:23.419 "iobuf_small_cache_size": 128, 00:31:23.419 "iobuf_large_cache_size": 16 00:31:23.419 } 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "method": "bdev_raid_set_options", 00:31:23.419 "params": { 00:31:23.419 "process_window_size_kb": 1024, 00:31:23.419 "process_max_bandwidth_mb_sec": 0 00:31:23.419 } 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "method": 
"bdev_iscsi_set_options", 00:31:23.419 "params": { 00:31:23.419 "timeout_sec": 30 00:31:23.419 } 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "method": "bdev_nvme_set_options", 00:31:23.419 "params": { 00:31:23.419 "action_on_timeout": "none", 00:31:23.419 "timeout_us": 0, 00:31:23.419 "timeout_admin_us": 0, 00:31:23.419 "keep_alive_timeout_ms": 10000, 00:31:23.419 "arbitration_burst": 0, 00:31:23.419 "low_priority_weight": 0, 00:31:23.419 "medium_priority_weight": 0, 00:31:23.419 "high_priority_weight": 0, 00:31:23.419 "nvme_adminq_poll_period_us": 10000, 00:31:23.419 "nvme_ioq_poll_period_us": 0, 00:31:23.419 "io_queue_requests": 512, 00:31:23.419 "delay_cmd_submit": true, 00:31:23.419 "transport_retry_count": 4, 00:31:23.419 "bdev_retry_count": 3, 00:31:23.419 "transport_ack_timeout": 0, 00:31:23.419 "ctrlr_loss_timeout_sec": 0, 00:31:23.419 "reconnect_delay_sec": 0, 00:31:23.419 "fast_io_fail_timeout_sec": 0, 00:31:23.419 "disable_auto_failback": false, 00:31:23.419 "generate_uuids": false, 00:31:23.419 "transport_tos": 0, 00:31:23.419 "nvme_error_stat": false, 00:31:23.419 "rdma_srq_size": 0, 00:31:23.419 "io_path_stat": false, 00:31:23.419 "allow_accel_sequence": false, 00:31:23.419 "rdma_max_cq_size": 0, 00:31:23.419 "rdma_cm_event_timeout_ms": 0, 00:31:23.419 "dhchap_digests": [ 00:31:23.419 "sha256", 00:31:23.419 "sha384", 00:31:23.419 "sha512" 00:31:23.419 ], 00:31:23.419 "dhchap_dhgroups": [ 00:31:23.419 "null", 00:31:23.419 "ffdhe2048", 00:31:23.419 "ffdhe3072", 00:31:23.419 "ffdhe4096", 00:31:23.419 "ffdhe6144", 00:31:23.419 "ffdhe8192" 00:31:23.419 ] 00:31:23.419 } 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "method": "bdev_nvme_attach_controller", 00:31:23.419 "params": { 00:31:23.419 "name": "nvme0", 00:31:23.419 "trtype": "TCP", 00:31:23.419 "adrfam": "IPv4", 00:31:23.419 "traddr": "127.0.0.1", 00:31:23.419 "trsvcid": "4420", 00:31:23.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.419 "prchk_reftag": false, 00:31:23.419 "prchk_guard": false, 00:31:23.419 "ctrlr_loss_timeout_sec": 0, 00:31:23.419 "reconnect_delay_sec": 0, 00:31:23.419 "fast_io_fail_timeout_sec": 0, 00:31:23.419 "psk": "key0", 00:31:23.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:23.419 "hdgst": false, 00:31:23.419 "ddgst": false 00:31:23.419 } 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "method": "bdev_nvme_set_hotplug", 00:31:23.419 "params": { 00:31:23.419 "period_us": 100000, 00:31:23.419 "enable": false 00:31:23.419 } 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "method": "bdev_wait_for_examine" 00:31:23.419 } 00:31:23.419 ] 00:31:23.419 }, 00:31:23.419 { 00:31:23.419 "subsystem": "nbd", 00:31:23.419 "config": [] 00:31:23.419 } 00:31:23.419 ] 00:31:23.419 }' 00:31:23.419 22:19:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:23.419 [2024-07-24 22:19:02.522454] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:31:23.419 [2024-07-24 22:19:02.522506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897451 ] 00:31:23.419 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.419 [2024-07-24 22:19:02.590661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.678 [2024-07-24 22:19:02.658930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.678 [2024-07-24 22:19:02.817698] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:24.245 22:19:03 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:24.245 22:19:03 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:24.245 22:19:03 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:24.245 22:19:03 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:24.245 22:19:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.512 22:19:03 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:24.512 22:19:03 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.512 22:19:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:24.512 22:19:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:24.512 22:19:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.774 22:19:03 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:24.774 22:19:03 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:24.774 22:19:03 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:24.774 22:19:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:25.032 22:19:04 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:25.032 22:19:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:25.032 22:19:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.NAUqTzeots /tmp/tmp.MetwJjt2XR 00:31:25.032 22:19:04 keyring_file -- keyring/file.sh@20 -- # killprocess 2897451 00:31:25.032 22:19:04 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2897451 ']' 00:31:25.032 22:19:04 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2897451 00:31:25.032 22:19:04 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:31:25.032 22:19:04 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:25.032 22:19:04 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2897451 00:31:25.032 22:19:04 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:25.032 22:19:04 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:25.032 22:19:04 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2897451' 00:31:25.032 killing process with pid 2897451 00:31:25.032 22:19:04 keyring_file -- common/autotest_common.sh@969 -- # kill 2897451 00:31:25.032 Received shutdown signal, test time was about 1.000000 seconds 00:31:25.032 00:31:25.032 Latency(us) 00:31:25.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.032 =================================================================================================================== 00:31:25.032 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:25.032 22:19:04 keyring_file -- common/autotest_common.sh@974 -- # wait 2897451 00:31:25.290 22:19:04 keyring_file -- keyring/file.sh@21 -- # killprocess 2895778 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2895778 ']' 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2895778 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2895778 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2895778' 00:31:25.290 killing process with pid 2895778 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@969 -- # kill 2895778 00:31:25.290 [2024-07-24 22:19:04.311237] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:25.290 22:19:04 keyring_file -- common/autotest_common.sh@974 -- # wait 2895778 00:31:25.548 00:31:25.548 real 0m11.943s 00:31:25.548 user 0m27.485s 00:31:25.548 sys 0m3.422s 00:31:25.548 22:19:04 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:25.548 22:19:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:25.548 ************************************ 00:31:25.548 END TEST keyring_file 00:31:25.548 ************************************ 00:31:25.548 22:19:04 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:31:25.549 22:19:04 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:25.549 22:19:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:25.549 22:19:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:25.549 22:19:04 -- common/autotest_common.sh@10 -- # set +x 00:31:25.549 ************************************ 00:31:25.549 START TEST keyring_linux 00:31:25.549 ************************************ 00:31:25.549 22:19:04 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:25.806 * Looking for test 
storage... 00:31:25.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:25.806 22:19:04 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:25.806 22:19:04 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.806 22:19:04 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.806 22:19:04 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.807 22:19:04 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.807 22:19:04 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.807 22:19:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.807 22:19:04 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.807 22:19:04 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.807 22:19:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:25.807 22:19:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:25.807 22:19:04 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:25.807 /tmp/:spdk-test:key0 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:25.807 22:19:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:25.807 22:19:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:25.807 /tmp/:spdk-test:key1 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2897868 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:25.807 22:19:04 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2897868 00:31:25.807 22:19:04 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2897868 ']' 00:31:25.807 22:19:04 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.807 22:19:04 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:25.807 22:19:04 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.807 22:19:04 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:25.807 22:19:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:25.807 [2024-07-24 22:19:04.969821] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:31:25.807 [2024-07-24 22:19:04.969877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897868 ] 00:31:25.807 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.065 [2024-07-24 22:19:05.037774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.065 [2024-07-24 22:19:05.111015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.630 22:19:05 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:26.630 22:19:05 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:31:26.630 22:19:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:26.630 22:19:05 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.630 22:19:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:26.630 [2024-07-24 22:19:05.762404] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.630 null0 00:31:26.630 [2024-07-24 22:19:05.794461] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:26.630 [2024-07-24 22:19:05.794832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:26.630 22:19:05 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.630 22:19:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:26.630 531230077 00:31:26.630 22:19:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:26.630 539719504 00:31:26.630 22:19:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2898099 00:31:26.630 22:19:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2898099 /var/tmp/bperf.sock 00:31:26.630 22:19:05 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:26.630 22:19:05 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2898099 ']' 00:31:26.630 22:19:05 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:26.630 22:19:05 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:26.631 22:19:05 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:26.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:26.631 22:19:05 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:26.631 22:19:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:26.888 [2024-07-24 22:19:05.867289] Starting SPDK v24.09-pre git sha1 38b03952e / DPDK 24.03.0 initialization... 
00:31:26.888 [2024-07-24 22:19:05.867340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898099 ] 00:31:26.888 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.888 [2024-07-24 22:19:05.936427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.888 [2024-07-24 22:19:06.010055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.817 22:19:06 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:27.817 22:19:06 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:31:27.817 22:19:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:27.817 22:19:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:27.817 22:19:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:27.817 22:19:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:28.075 22:19:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:28.075 22:19:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:28.075 [2024-07-24 22:19:07.213390] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:28.075 nvme0n1 00:31:28.332 22:19:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:28.332 22:19:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:28.332 22:19:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:28.332 22:19:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:28.332 22:19:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:28.332 22:19:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.332 22:19:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:28.332 22:19:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:28.332 22:19:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:28.332 22:19:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:28.332 22:19:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:28.332 22:19:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:28.332 22:19:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.588 22:19:07 keyring_linux -- keyring/linux.sh@25 -- # sn=531230077 00:31:28.588 22:19:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:28.588 22:19:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:31:28.588 22:19:07 keyring_linux -- keyring/linux.sh@26 -- # [[ 531230077 == \5\3\1\2\3\0\0\7\7 ]] 00:31:28.588 22:19:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 531230077 00:31:28.588 22:19:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:28.588 22:19:07 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:28.588 Running I/O for 1 seconds... 00:31:29.957 00:31:29.957 Latency(us) 00:31:29.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.958 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:29.958 nvme0n1 : 1.01 13926.53 54.40 0.00 0.00 9149.90 2555.90 12268.34 00:31:29.958 =================================================================================================================== 00:31:29.958 Total : 13926.53 54.40 0.00 0.00 9149.90 2555.90 12268.34 00:31:29.958 0 00:31:29.958 22:19:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:29.958 22:19:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:29.958 22:19:08 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:29.958 22:19:08 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:29.958 22:19:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:29.958 22:19:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:29.958 22:19:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:29.958 22:19:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:29.958 22:19:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:29.958 22:19:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:29.958 22:19:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:29.958 22:19:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:29.958 22:19:09 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:31:29.958 22:19:09 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:29.958 22:19:09 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:29.958 22:19:09 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:29.958 22:19:09 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:29.958 22:19:09 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:29.958 22:19:09 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:29.958 22:19:09 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:30.215 [2024-07-24 22:19:09.297277] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:30.215 [2024-07-24 22:19:09.298008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x764750 (107): Transport endpoint is not connected 00:31:30.215 [2024-07-24 22:19:09.299002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x764750 (9): Bad file descriptor 00:31:30.215 [2024-07-24 22:19:09.300003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:30.215 [2024-07-24 22:19:09.300017] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:30.215 [2024-07-24 22:19:09.300027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:30.215 request: 00:31:30.215 { 00:31:30.215 "name": "nvme0", 00:31:30.215 "trtype": "tcp", 00:31:30.215 "traddr": "127.0.0.1", 00:31:30.215 "adrfam": "ipv4", 00:31:30.215 "trsvcid": "4420", 00:31:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.215 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.215 "prchk_reftag": false, 00:31:30.215 "prchk_guard": false, 00:31:30.215 "hdgst": false, 00:31:30.215 "ddgst": false, 00:31:30.215 "psk": ":spdk-test:key1", 00:31:30.215 "method": "bdev_nvme_attach_controller", 00:31:30.215 "req_id": 1 00:31:30.215 } 00:31:30.215 Got JSON-RPC error response 00:31:30.215 response: 00:31:30.215 { 00:31:30.215 "code": -5, 00:31:30.215 "message": "Input/output error" 00:31:30.215 } 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@33 -- # sn=531230077 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 531230077 00:31:30.215 1 links removed 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@33 -- # sn=539719504 00:31:30.215 22:19:09 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 539719504 00:31:30.215 1 links removed 00:31:30.215 22:19:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2898099 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2898099 ']' 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2898099 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2898099 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2898099' 00:31:30.215 killing process with pid 2898099 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@969 -- # kill 2898099 00:31:30.215 Received shutdown signal, test time was about 1.000000 seconds 00:31:30.215 00:31:30.215 Latency(us) 00:31:30.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.215 =================================================================================================================== 00:31:30.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:30.215 22:19:09 keyring_linux -- common/autotest_common.sh@974 -- # wait 2898099 00:31:30.472 22:19:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2897868 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2897868 ']' 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2897868 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2897868 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2897868' 00:31:30.472 killing process with pid 2897868 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@969 -- # kill 2897868 00:31:30.472 22:19:09 keyring_linux -- common/autotest_common.sh@974 -- # wait 2897868 00:31:30.729 00:31:30.729 real 0m5.240s 00:31:30.729 user 0m8.931s 00:31:30.729 sys 0m1.714s 00:31:30.729 22:19:09 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:30.729 22:19:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:30.729 ************************************ 00:31:30.729 END TEST keyring_linux 00:31:30.729 ************************************ 00:31:30.986 22:19:09 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@347 -- 
# '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:31:30.986 22:19:09 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:30.986 22:19:09 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:30.986 22:19:09 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:30.986 22:19:09 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:31:30.986 22:19:09 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:31:30.986 22:19:09 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:31:30.986 22:19:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:30.986 22:19:09 -- common/autotest_common.sh@10 -- # set +x 00:31:30.986 22:19:09 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:31:30.986 22:19:09 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:30.986 22:19:09 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:30.986 22:19:09 -- common/autotest_common.sh@10 -- # set +x 00:31:37.552 INFO: APP EXITING 00:31:37.552 INFO: killing all VMs 00:31:37.552 INFO: killing vhost app 00:31:37.552 INFO: EXIT DONE 00:31:40.081 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:31:40.081 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:31:43.360 Cleaning 00:31:43.360 Removing: /var/run/dpdk/spdk0/config 00:31:43.360 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:43.360 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:43.360 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:43.360 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:43.360 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:43.360 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:43.360 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:43.360 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:43.360 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:43.360 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:43.360 Removing: /var/run/dpdk/spdk1/config 00:31:43.360 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:43.360 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:43.360 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:43.360 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:43.360 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:43.360 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:43.360 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:31:43.360 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:43.360 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:43.360 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:43.360 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:43.360 Removing: /var/run/dpdk/spdk2/config 00:31:43.360 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:43.360 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:43.360 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:43.360 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:43.360 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:43.360 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:31:43.360 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:43.360 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:43.360 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:43.360 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:43.360 Removing: /var/run/dpdk/spdk3/config 00:31:43.618 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:43.618 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:43.618 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:43.618 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:43.618 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:43.618 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:43.618 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:43.618 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:43.618 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:43.618 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:43.618 Removing: /var/run/dpdk/spdk4/config 00:31:43.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:43.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:43.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:43.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:43.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:43.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:43.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:43.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:43.619 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:43.619 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:43.619 Removing: /dev/shm/bdev_svc_trace.1 00:31:43.619 Removing: /dev/shm/nvmf_trace.0 00:31:43.619 Removing: /dev/shm/spdk_tgt_trace.pid2498076 00:31:43.619 Removing: /var/run/dpdk/spdk0 00:31:43.619 Removing: /var/run/dpdk/spdk1 00:31:43.619 Removing: /var/run/dpdk/spdk2 00:31:43.619 Removing: /var/run/dpdk/spdk3 00:31:43.619 Removing: /var/run/dpdk/spdk4 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2495618 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2496872 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2498076 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2498775 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2499648 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2499882 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2500984 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2501119 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2501369 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2503082 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2504518 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2504827 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2505151 
00:31:43.619 Removing: /var/run/dpdk/spdk_pid2505484 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2505810 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2506091 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2506312 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2506554 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2507281 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2510406 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2510712 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2511006 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2511270 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2511825 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2511841 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2512402 00:31:43.619 Removing: /var/run/dpdk/spdk_pid2512659 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2512964 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2512978 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2513265 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2513472 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2513902 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2514185 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2514503 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2518441 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2523160 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2533814 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2534563 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2539088 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2539368 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2543889 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2549999 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2552728 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2563814 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2573783 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2575544 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2576596 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2594501 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2598789 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2645428 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2651046 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2657198 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2663444 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2663454 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2664457 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2665279 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2666089 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2666840 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2666862 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2667159 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2667211 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2667234 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2668731 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2669527 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2670370 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2671002 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2671107 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2671370 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2672520 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2673635 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2682247 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2707881 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2712661 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2714247 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2716118 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2716366 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2716637 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2716901 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2717484 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2719334 
00:31:43.877 Removing: /var/run/dpdk/spdk_pid2720450 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2721014 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2723157 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2723930 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2724550 00:31:43.877 Removing: /var/run/dpdk/spdk_pid2728845 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2739514 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2743767 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2750642 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2752109 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2753630 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2758208 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2762448 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2770409 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2770417 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2775193 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2775455 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2775716 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2776107 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2776241 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2781037 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2781677 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2786293 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2789086 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2795286 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2801019 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2809915 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2817142 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2817144 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2836223 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2836784 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2837566 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2838142 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2839187 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2839754 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2840510 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2841632 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2846137 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2846407 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2852532 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2852792 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2855076 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2863308 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2863313 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2868803 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2870836 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2872827 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2874019 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2876018 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2877234 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2887143 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2887668 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2888191 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2890643 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2891174 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2891700 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2895778 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2895795 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2897451 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2897868 00:31:44.163 Removing: /var/run/dpdk/spdk_pid2898099 00:31:44.163 Clean 00:31:44.430 22:19:23 -- common/autotest_common.sh@1451 -- # return 0 00:31:44.430 22:19:23 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:31:44.430 22:19:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.430 22:19:23 -- common/autotest_common.sh@10 -- # set +x 00:31:44.430 
22:19:23 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:31:44.430 22:19:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.430 22:19:23 -- common/autotest_common.sh@10 -- # set +x 00:31:44.430 22:19:23 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:44.430 22:19:23 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:31:44.430 22:19:23 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:31:44.430 22:19:23 -- spdk/autotest.sh@395 -- # hash lcov 00:31:44.430 22:19:23 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:44.430 22:19:23 -- spdk/autotest.sh@397 -- # hostname 00:31:44.430 22:19:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:31:44.686 geninfo: WARNING: invalid characters removed from testname! 00:32:06.603 22:19:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:07.171 22:19:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:09.073 22:19:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:10.974 22:19:49 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:12.350 22:19:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:14.259 22:19:53 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:15.633 22:19:54 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:15.633 22:19:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.633 22:19:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:15.633 22:19:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.633 22:19:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.633 22:19:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.633 22:19:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.633 22:19:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.633 22:19:54 -- paths/export.sh@5 -- $ export PATH 00:32:15.633 22:19:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.633 22:19:54 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:32:15.633 22:19:54 -- common/autobuild_common.sh@447 -- $ date +%s 00:32:15.633 22:19:54 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721852394.XXXXXX 00:32:15.891 22:19:54 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721852394.ObssyJ 00:32:15.891 22:19:54 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:32:15.891 22:19:54 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:32:15.891 22:19:54 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:32:15.891 22:19:54 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:32:15.891 22:19:54 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:32:15.891 22:19:54 -- common/autobuild_common.sh@463 -- $ get_config_params 00:32:15.891 22:19:54 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:32:15.891 22:19:54 -- common/autotest_common.sh@10 -- $ set +x 00:32:15.891 22:19:54 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:32:15.891 22:19:54 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:32:15.891 22:19:54 -- pm/common@17 -- $ local monitor 00:32:15.891 22:19:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:15.891 22:19:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:15.891 22:19:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:15.891 22:19:54 -- pm/common@21 -- $ date +%s 00:32:15.891 22:19:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:15.891 22:19:54 -- pm/common@21 -- $ date +%s 00:32:15.891 22:19:54 -- pm/common@21 -- $ date +%s 00:32:15.891 22:19:54 -- pm/common@25 -- $ sleep 1 00:32:15.891 22:19:54 -- pm/common@21 -- $ date +%s 00:32:15.891 22:19:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721852394 00:32:15.891 22:19:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721852394 00:32:15.891 22:19:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721852394 00:32:15.891 22:19:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721852394 00:32:15.892 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721852394_collect-cpu-temp.pm.log 00:32:15.892 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721852394_collect-vmstat.pm.log 00:32:15.892 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721852394_collect-cpu-load.pm.log 00:32:15.892 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721852394_collect-bmc-pm.bmc.pm.log 00:32:16.827 22:19:55 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:32:16.827 22:19:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:32:16.827 22:19:55 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:16.827 22:19:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:16.827 22:19:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:16.827 22:19:55 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:16.827 22:19:55 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:16.827 22:19:55 -- 
common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:16.827 22:19:55 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:16.827 22:19:55 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:16.827 22:19:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:16.827 22:19:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:16.827 22:19:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:16.827 22:19:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:16.827 22:19:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:32:16.827 22:19:55 -- pm/common@44 -- $ pid=2909057 00:32:16.827 22:19:55 -- pm/common@50 -- $ kill -TERM 2909057 00:32:16.827 22:19:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:16.827 22:19:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:32:16.827 22:19:55 -- pm/common@44 -- $ pid=2909059 00:32:16.827 22:19:55 -- pm/common@50 -- $ kill -TERM 2909059 00:32:16.827 22:19:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:16.827 22:19:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:32:16.827 22:19:55 -- pm/common@44 -- $ pid=2909061 00:32:16.827 22:19:55 -- pm/common@50 -- $ kill -TERM 2909061 00:32:16.827 22:19:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:16.827 22:19:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:32:16.827 22:19:55 -- pm/common@44 -- $ pid=2909086 00:32:16.827 22:19:55 -- pm/common@50 -- $ sudo -E kill -TERM 2909086 00:32:16.827 + [[ -n 2386299 ]] 00:32:16.827 + sudo kill 2386299 00:32:16.836 [Pipeline] } 00:32:16.854 [Pipeline] // stage 00:32:16.860 [Pipeline] } 00:32:16.876 [Pipeline] // timeout 00:32:16.881 [Pipeline] } 00:32:16.898 [Pipeline] // catchError 00:32:16.903 [Pipeline] } 00:32:16.921 [Pipeline] // wrap 00:32:16.927 [Pipeline] } 00:32:16.941 [Pipeline] // catchError 00:32:16.950 [Pipeline] stage 00:32:16.953 [Pipeline] { (Epilogue) 00:32:16.967 [Pipeline] catchError 00:32:16.968 [Pipeline] { 00:32:16.983 [Pipeline] echo 00:32:16.985 Cleanup processes 00:32:16.991 [Pipeline] sh 00:32:17.329 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:17.329 2909178 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:32:17.329 2909510 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:17.343 [Pipeline] sh 00:32:17.629 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:17.629 ++ grep -v 'sudo pgrep' 00:32:17.629 ++ awk '{print $1}' 00:32:17.629 + sudo kill -9 2909178 00:32:17.641 [Pipeline] sh 00:32:17.925 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:17.925 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:32:22.120 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:32:26.328 [Pipeline] sh 00:32:26.613 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:26.614 Artifacts sizes are good 00:32:26.629 [Pipeline] 
archiveArtifacts 00:32:26.637 Archiving artifacts 00:32:26.818 [Pipeline] sh 00:32:27.104 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:32:27.118 [Pipeline] cleanWs 00:32:27.127 [WS-CLEANUP] Deleting project workspace... 00:32:27.127 [WS-CLEANUP] Deferred wipeout is used... 00:32:27.133 [WS-CLEANUP] done 00:32:27.135 [Pipeline] } 00:32:27.148 [Pipeline] // catchError 00:32:27.158 [Pipeline] sh 00:32:27.440 + logger -p user.info -t JENKINS-CI 00:32:27.449 [Pipeline] } 00:32:27.465 [Pipeline] // stage 00:32:27.471 [Pipeline] } 00:32:27.489 [Pipeline] // node 00:32:27.496 [Pipeline] End of Pipeline 00:32:27.531 Finished: SUCCESS